Search results
1 – 10 of 150
Peiman Tavakoli, Ibrahim Yitmen, Habib Sadri and Afshin Taheri
Abstract
Purpose
This study focuses on structured data provision and asset information model maintenance, developing a data provenance model on a blockchain-based digital twin of a smart and sustainable built environment (DT) for predictive asset management (PAM) in building facilities.
Design/methodology/approach
Qualitative research data were collected through a comprehensive scoping review of secondary sources. Additionally, primary data were gathered through interviews with industry specialists. The analysis of the data served as the basis for developing blockchain-based DT data provenance models and scenarios. A case study involving a conference room in an office building in Stockholm was conducted to assess the proposed data provenance model. The implementation utilized the Remix Ethereum platform and Sepolia testnet.
Findings
Based on the analysis of results, a data provenance model on blockchain-based DT which ensures the reliability and trustworthiness of data used in PAM processes was developed. This was achieved by providing a transparent and immutable record of data origin, ownership and lineage.
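The immutability claim rests on hash-linking each provenance record to its predecessor. The study implements this in Solidity via Remix Ethereum on the Sepolia testnet; purely as an illustrative sketch of the hash-chain idea (record fields, names and values below are hypothetical, not taken from the study), in Python:

```python
import hashlib
import json

def add_record(chain, origin, owner, payload):
    """Append a provenance record whose hash covers the previous record's
    hash, so tampering with any earlier entry invalidates all later hashes."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"origin": origin, "owner": owner, "payload": payload, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return chain

chain = []
add_record(chain, "sensor-07", "facility-ops", {"temp_c": 21.4})
add_record(chain, "sensor-07", "facility-ops", {"temp_c": 22.0})
print(chain[1]["prev"] == chain[0]["hash"])  # records are linked
```

On a blockchain the same linkage is enforced by the ledger itself; the sketch only shows why lineage becomes tamper-evident once each record commits to its predecessor.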
Practical implications
The proposed model enables decentralized applications (DApps) to publish real-time data obtained from dynamic operations and maintenance processes, enhancing the reliability and effectiveness of data for PAM.
Originality/value
The research presents a data provenance model on a blockchain-based DT, specifically tailored to PAM in building facilities. The proposed model enhances decision-making processes related to PAM by ensuring data reliability and trustworthiness and providing valuable insights for specialists and stakeholders interested in the application of blockchain technology in asset management and data provenance.
Harleen Kaur and Vinita Kumari
Abstract
Diabetes is a major metabolic disorder that can adversely affect the entire body. Undiagnosed diabetes increases the risk of cardiac stroke, diabetic nephropathy and other disorders, and millions of people worldwide are affected. Early detection of diabetes is essential to maintaining a healthy life, and the disease is a cause of global concern as cases rise rapidly. Machine learning (ML) is a computational method for learning automatically from experience and improving performance to make more accurate predictions. In the current research, ML techniques were applied to the Pima Indian diabetes dataset to develop trends and detect patterns associated with risk factors, using the R data manipulation tool. To classify patients as diabetic or non-diabetic, five predictive models were developed and analyzed using supervised machine learning algorithms: linear kernel support vector machine (SVM-linear), radial basis function (RBF) kernel support vector machine, k-nearest neighbour (k-NN), artificial neural network (ANN) and multifactor dimensionality reduction (MDR).
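The classification task can be illustrated with one of the listed algorithms, k-NN. The study itself used R on the Pima Indian dataset; the sketch below is a minimal stdlib Python stand-in on invented toy points (the two features, values and labels are hypothetical, not the Pima data):

```python
import math

def knn_predict(train, labels, point, k=3):
    """Classify `point` by majority vote among its k nearest training rows."""
    dists = sorted(
        (math.dist(row, point), lab) for row, lab in zip(train, labels)
    )
    nearest = [lab for _, lab in dists[:k]]
    return max(set(nearest), key=nearest.count)

# Toy stand-in for two clinical features (e.g. glucose, BMI); NOT real data.
train = [(85, 22.0), (90, 24.5), (160, 33.1),
         (155, 35.4), (100, 26.0), (170, 31.0)]
labels = ["non-diabetic", "non-diabetic", "diabetic",
          "diabetic", "non-diabetic", "diabetic"]

print(knn_predict(train, labels, (158, 34.0)))  # high glucose, high BMI query
```

The other four algorithms differ in how the decision boundary is learned, but share the same fit-then-classify workflow shown here.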
Robin K. Chou, Kuan-Cheng Ko and S. Ghon Rhee
Abstract
National cultures significantly explain cross-country differences in the relation between asset growth and stock returns. Motivated by the notion that managers in individualistic and low uncertainty-avoiding cultures have a higher tendency to overinvest, this study aims to show that the negative relation between asset growth and stock returns is stronger in countries with such cultural features. Once the researchers control for cultural dimensions, proxies associated with the q-theory, limits-to-arbitrage, corporate governance, investor protection and accounting quality provide no incremental power for the relation between asset growth and stock returns across countries. Evidence of this study highlights the importance of the overinvestment hypothesis in explaining the asset growth anomaly around the world.
Kuang Junwei, Hangzhou Yang, Liu Junjiang and Yan Zhijun
Abstract
Purpose
Previous dynamic prediction models rarely handle multi-period data with different intervals, and the large-scale patient hospital records are not effectively used to improve the prediction performance. This paper aims to focus on the prediction of cardiovascular disease using the improved long short-term memory (LSTM) model.
Design/methodology/approach
A new model based on the traditional LSTM was proposed to predict cardiovascular disease. The irregular time intervals are smoothed to obtain a time parameter vector, which is used as the input to the LSTM's forget gate to overcome the prediction obstacle caused by irregular intervals.
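As a rough sketch of the idea only: the paper's exact smoothing function is not given in the abstract, so the decay `1/log(e + Δt)` below is an assumption borrowed from common time-aware LSTM variants, and the gate is reduced to a single scalar cell.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def time_aware_forget(f_gate, delta_t):
    """Scale a forget-gate activation by a monotonically decreasing function
    of the elapsed time between two medical visits: the longer the gap,
    the more of the cell memory is discarded."""
    decay = 1.0 / math.log(math.e + delta_t)  # assumed smoothing function
    return f_gate * decay

f = sigmoid(0.8)                       # forget-gate activation for one cell
print(time_aware_forget(f, 0.0))       # short gap: barely dampened
print(time_aware_forget(f, 30.0))      # long gap: memory decays more
```

The full model applies this per gate vector inside the recurrence; the sketch only shows how the time parameter modulates forgetting.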
Findings
The experimental results show that the dynamic prediction model proposed in this paper obtained significantly better classification performance than the traditional LSTM model.
Originality/value
In this paper, the authors improved the LSTM by smoothing the irregular time between different medical stages of the patient to obtain the temporal feature vector.
Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross
Abstract
Purpose
This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.
Design/methodology/approach
Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
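The tile-based workflow described above — cutting the image into fixed-size tiles, scoring each tile independently, and keeping those above a probability threshold — can be sketched as follows (function names and wiring are illustrative, not the authors' code; the 60% threshold is the one reported in the Findings):

```python
def make_tiles(width, height, tile):
    """Enumerate top-left corners of fixed-size tiles covering an image,
    mirroring a workflow where a CNN scores each tile independently."""
    return [(x, y) for y in range(0, height - tile + 1, tile)
                   for x in range(0, width - tile + 1, tile)]

def flag_mounds(scores, threshold=0.60):
    """Keep indices of tiles whose mound probability clears the threshold."""
    return [i for i, p in enumerate(scores) if p >= threshold]

tiles = make_tiles(600, 400, 200)      # toy 600x400 image, 200px tiles
print(len(tiles))                      # a 3 x 2 grid of tiles
print(flag_mounds([0.12, 0.81, 0.59, 0.97, 0.40, 0.66]))
```

Because each flagged tile must then be checked against field data, false positives at this stage translate directly into wasted verification effort on the ground.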
Findings
Validation of results against field data showed that self-reported success rates were misleadingly high, and that the model was misidentifying most features. Setting an identification threshold at 60% probability, and noting that we used an approach where the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.
Research limitations/implications
Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.
Practical implications
Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.
Social implications
Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection have grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.
Originality/value
Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.
Ivan Soukal, Jan Mačí, Gabriela Trnková, Libuse Svobodova, Martina Hedvičáková, Eva Hamplova, Petra Maresova and Frank Lefley
Abstract
Purpose
The primary purpose of this paper is to identify the so-called core authors and their publications according to pre-defined criteria and thereby direct the users to the fastest and easiest way to get a picture of the otherwise pervasive field of bankruptcy prediction models. The authors aim to present state-of-the-art bankruptcy prediction models assembled by the field's core authors and critically examine the approaches and methods adopted.
Design/methodology/approach
The authors conducted a literature search in November 2022 through scientific databases Scopus, ScienceDirect and the Web of Science, focussing on a publication period from 2010 to 2022. The database search query was formulated as “Bankruptcy Prediction” and “Model or Tool”. However, the authors intentionally did not specify any model or tool to make the search non-discriminatory. The authors reviewed over 7,300 articles.
Findings
This paper has addressed the research questions: (1) What are the most important publications of the core authors in terms of the target country, size of the sample, sector of the economy and specialization in SME? (2) What are the most used methods for deriving or adjusting models appearing in the articles of the core authors? (3) To what extent do the core authors include accounting-based variables, non-financial or macroeconomic indicators, in their prediction models? Despite the advantages of new-age methods, based on the information in the articles analyzed, it can be deduced that conventional methods will continue to be beneficial, mainly due to the higher degree of ease of use and the transferability of the derived model.
Research limitations/implications
The authors identify several gaps in the literature which this research does not address but could be the focus of future research.
Practical implications
The authors provide practitioners and academics with an extract from a wide range of studies, available in scientific databases, on bankruptcy prediction models or tools, resulting in a large number of records being reviewed. This research will interest shareholders, corporations, and financial institutions interested in models of financial distress prediction or bankruptcy prediction to help identify troubled firms in the early stages of distress.
Social implications
Bankruptcy is a major concern for society in general, especially in today's economic environment. Therefore, being able to predict possible business failure at an early stage will give an organization time to address the issue and maybe avoid bankruptcy.
Originality/value
To the authors' knowledge, this is the first paper to identify the core authors in the bankruptcy prediction model and methods field. The primary value of the study is the current overview and analysis of the theoretical and practical development of knowledge in this field in the form of the construction of new models using classical or new-age methods. Also, the paper adds value by critically examining existing models and their modifications, including a discussion of the benefits of non-accounting variables usage.
Abstract
Purpose
This paper aims to introduce a crowd-based method for theorizing. The purpose is not to achieve a scientific theory; rather, it is to achieve a model that may challenge current scientific theories or direct research toward new phenomena.
Design/methodology/approach
This paper describes a case study of theorizing by using a crowd-based method. The first section of the paper reviews what is known about crowdsourcing, crowd science and the aggregation of non-expert views. The second section details the case study. The third section analyses the aggregation. Finally, the fourth section elaborates the conclusions, limitations and future research.
Findings
This study assesses the extent to which the crowd-based method produces results similar to theories tested and published by experts.
Research limitations/implications
From a theoretical perspective, this study provides evidence to support the research agenda associated with crowd science. The main limitation of this study is that the crowd-generated research models and the expert research models are compared only in terms of their graphs. Nevertheless, some academics may argue that theory building is about an academic heritage.
Practical implications
This paper exemplifies how to obtain an expert-level research model by aggregating the views of non-experts.
Social implications
This study is particularly important for institutions with limited access to costly databases, labs and researchers.
Originality/value
Previous research suggested that a collective of individuals may help to conduct all the stages of a research endeavour. Nevertheless, a formal method for theorizing based on the aggregation of non-expert views does not exist. This paper provides the method and evidence of its practical implications.
Sofía Blanco-Moreno, Ana M. González-Fernández and Pablo Antonio Muñoz-Gallego
Abstract
Purpose
The purpose of this study was to uncover representative emergent areas and to examine the research area of marketing, tourism and big data (BD) to assess how these thematic areas have developed over a 27-year time period from 1996 to 2022. This study analyzed 1,152 studies to identify the principal thematic areas and emergent topics, principal theories used, predominant forms of analysis and the most productive authors in terms of research.
Design/methodology/approach
The articles for this research were all selected from the Web of Science database. A systematic and quantitative literature review was performed. This study used SciMAT software to extract indicators. Specifically, this study analyzed productivity and produced a science map.
Findings
The findings suggest that interest in this area has increased gradually. The outputs also reveal the innovative effort of industry in new technologies for developing models for tourism marketing. Ten research areas were identified: “destination marketing,” “mobility patterns,” “co-creation,” “gastronomy,” “sustainability,” “tourist behavior,” “market segmentation,” “artificial neural networks,” “pricing” and “tourist satisfaction.”
Originality/value
This work is unique in proposing an agenda for future research into tourism marketing research with new technologies such as BD and artificial intelligence techniques. In addition, the results presented here fill the current gap in the research since while there have been literature reviews covering tourism with BD or marketing, these areas have not been studied as a whole.
Junghee Han and Chang-min Park
Abstract
Purpose
This paper investigates the role of institutional and corporate entrepreneurship in coping with a firm's impasses through adoption of new technology ahead of other firms. It also elucidates the importance of firm-specific institutional and corporate entrepreneurship created from the firm's norms.
Design/methodology/approach
The research frame is as follows: first, studies on institutional and corporate entrepreneurship are reviewed using prior literature and preliminary references; second, an analytical research frame is proposed; finally, phase-based case studies are conducted to address the research objective.
Findings
Kumho Tire was the first tire manufacturer in the world to exploit radio-frequency identification for passenger car tires. The firm drew on many failures in developing this cutting-edge technology with advanced information and communication technology, cultivated through heterogeneous institutional and corporate entrepreneurship.
Originality/value
The firm concentrated its resources on building the organization's communication processes and enhancing the quality of its human resources from its early stages so as to create distinguishable corporate entrepreneurship.
Daniel Šandor and Marina Bagić Babac
Abstract
Purpose
Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using the approach of machine and deep learning.
Design/methodology/approach
For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
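Of the models listed, the simplest baseline is logistic regression over bag-of-words features. A self-contained toy sketch (the four-comment corpus, vocabulary and labels are invented for illustration, not drawn from the 1.3 million-comment data set) might look like:

```python
import math
from collections import Counter

def featurize(texts, vocab):
    """Bag-of-words counts over a fixed vocabulary."""
    return [[Counter(t.lower().split())[w] for w in vocab] for t in texts]

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain stochastic gradient descent for logistic regression."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

vocab = ["great", "totally", "love", "broken", "works", "again"]
texts = ["great totally love it",           # toy sarcastic label
         "totally great it broke again",    # toy sarcastic label
         "it works great",
         "love it it works"]
y = [1, 1, 0, 0]
X = featurize(texts, vocab)
w, b = train_logreg(X, y)
print([predict(w, b, x) for x in X])
```

Word counts alone ignore the contextual and tonal cues the abstract highlights, which is precisely why the deep learning models, and the BERT-based model in particular, fare better on this task.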
Findings
The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art natural language processing model, the BERT-based model, outperformed the other machine and deep learning models.
Originality/value
This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.