Search results
1 – 10 of over 1000
Ruchi Kejriwal, Monika Garg and Gaurav Sarin
Abstract
Purpose
The stock market has always been lucrative for investors, but because of its speculative nature, it is difficult to predict price movements. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study the structured data of a company, while technical analysis helps to study price trends. The increasing and easy availability of unstructured data has made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.
Design/methodology/approach
The research includes text mining and then the creation of various classification models. The accuracy of these models is checked using a confusion matrix.
Findings
Out of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy of 68%. This model can now be used to analyse tweets, news and various other unstructured data to predict price movements.
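The accuracy check described above can be sketched in a few lines: build a confusion matrix from true and predicted sentiment labels, then read accuracy off its diagonal. The labels and sample data below are hypothetical; the study's actual models and data are not reproduced here.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Count (true, predicted) label pairs into a nested dict."""
    counts = Counter(zip(y_true, y_pred))
    return {t: {p: counts[(t, p)] for p in labels} for t in labels}

def accuracy(matrix):
    """Accuracy = sum of the diagonal / total number of samples."""
    correct = sum(matrix[label][label] for label in matrix)
    total = sum(sum(row.values()) for row in matrix.values())
    return correct / total

labels = ["positive", "negative", "neutral"]
y_true = ["positive", "negative", "neutral", "positive", "negative", "neutral"]
y_pred = ["positive", "negative", "positive", "positive", "neutral", "neutral"]

cm = confusion_matrix(y_true, y_pred, labels)
print(round(accuracy(cm), 3))  # → 0.667 (4 correct out of 6)
```

The off-diagonal cells (here, one neutral item predicted positive and one negative item predicted neutral) show exactly where a classifier confuses sentiment classes, which a bare accuracy figure hides.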
Originality/value
This study will help investors quickly classify a news item or a tweet as “positive”, “negative” or “neutral” and determine stock price trends.
Chi-Un Lei, Wincy Chan and Yuyue Wang
Abstract
Purpose
Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curriculum. The purpose of this study is to investigate the connection of existing common core courses in a university to SDG education. In particular, this study wanted to know how common core courses can be classified by a machine-learning approach according to SDGs.
Design/methodology/approach
In this report, the authors used machine learning techniques to tag the 166 common core courses in a university with SDGs and then analyzed the results using visualizations. The training data set comes from the OSDG public community data set, which the community had verified. Key descriptions of the common core courses were used for the classification, which relied on the multinomial logistic regression algorithm. Descriptive analyses at the course, theme and curriculum levels are included to illustrate the proposed approach's functions.
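As a rough illustration of the classification step, the sketch below trains a multinomial logistic regression (softmax) classifier by gradient descent on toy bag-of-words counts. The features, SDG labels and hyperparameters are invented for illustration and are not the study's data or model.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_multinomial_lr(X, y, n_classes, lr=0.5, epochs=500):
    """Fit a softmax classifier by full-batch gradient descent on cross-entropy."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)                 # predicted class probabilities
        W -= lr * X.T @ (P - Y) / n        # gradient step
    return W

# Hypothetical bag-of-words counts for course descriptions:
# columns = occurrences of "poverty", "energy", "ocean".
X = np.array([[3, 0, 0], [2, 1, 0], [0, 3, 0],
              [0, 2, 1], [0, 0, 3], [1, 0, 2]], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2])           # 0 = SDG 1, 1 = SDG 7, 2 = SDG 14

W = train_multinomial_lr(X, y, n_classes=3)
P = softmax(X @ W)                          # each row sums to 1
pred = P.argmax(axis=1)
```

In the study's setting, the columns would instead be TF-IDF-style features extracted from course descriptions and the classes the 17 SDGs; the mechanics of the classifier are the same.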
Findings
The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.
Research limitations/implications
The study can achieve more accurate model training through adopting advanced machine learning algorithms (e.g. deep learning, multioutput multiclass machine learning algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data set for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) carrying SDG classifications. The performance of the algorithm should also be compared with other computer-based and human-based SDG classification approaches, within a systematic evaluation framework, to cross-check the results. Furthermore, the study can be extended by circulating the results to students and examining how they would interpret and use them when choosing courses. In addition, the study mainly focused on classifying the topics taught in courses and cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in those courses. The study can also analyze the assessment tasks and rubrics of courses to see whether these tasks help students understand and take action on SDGs.
Originality/value
The proposed approach explores the possibility of using machine learning for SDG classification at scale.
Zakaria Sakyoud, Abdessadek Aaroud and Khalid Akodadi
Abstract
Purpose
The main goal of this research work is the optimization of the purchasing business process in the Moroccan public sector in terms of transparency and budgetary optimization. The authors have worked on the public university as an implementation field.
Design/methodology/approach
The design of the research work followed the design science research (DSR) methodology for information systems. DSR is a research paradigm wherein a designer answers questions relevant to human problems through the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. The authors have adopted a techno-functional approach. The technical part consists of the development of an intelligent recommendation system that supports the choice of optimal information technology (IT) equipment for decision-makers. This intelligent recommendation system relies on a set of functional and business concepts, namely the Moroccan normative laws and Control Objectives for Information and Related Technology's (COBIT) guidelines in information system governance.
Findings
The modeling of business processes in public universities is established using business process model and notation (BPMN) in accordance with official regulations. The set of BPMN models constitutes a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary waste, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature. Implementation of artificial intelligence techniques can bring great value in terms of transparency and fluidity in purchasing business process execution.
Research limitations/implications
Business limitations: First, the proposed system was modeled to handle one type of product, namely computer-related equipment; the authors intend to extend the model to other types of products in future work. Moreover, the system proposes an optimal purchasing order and assumes that decision makers will rely on it to choose between offers. As a perspective, the authors plan to work on complete automation of the workflow, including vendor selection and offer validation. Technical limitations: Natural language processing (NLP) is a widely used sentiment analysis (SA) technique that enabled the authors to validate the proposed system. Even when working on samples of data sets, the authors noticed NLP's dependency on huge computing power. The authors intend to experiment with learning- and knowledge-based SA and to assess their computing power consumption and analysis accuracy compared with NLP. Another technical limitation relates to web scraping: the users' reviews are crucial for the system, and to guarantee timely and reliable reviews, the system has to search websites automatically, which confronts the authors with the limitations of web scraping, such as constantly changing website structures and scraping restrictions.
Practical implications
The modeling of business processes in public universities is established using BPMN in accordance with official regulations. The set of BPMN models constitutes a powerful repository not only for business process execution but also for further optimization. Governance generally aims to reduce budgetary waste, and the authors' recommendation system demonstrates a technical and methodological approach enabling this feature.
Originality/value
The adopted techno-functional approach enabled the authors to bring information system governance from a highly abstract level to a practical implementation where the theoretical best practices and guidelines are transformed to a tangible application.
Aslıhan Dursun-Cengizci and Meltem Caber
Abstract
Purpose
This study aims to predict customer churn in resort hotels by calculating the churn probability of repeat customers for future stays in the same hotel brand.
Design/methodology/approach
Based on the recency, frequency, monetary (RFM) paradigm, random forest and logistic regression supervised machine learning algorithms were used to predict churn behavior. The model with superior performance was used to detect potential churners and generate a priority matrix.
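A minimal sketch of the RFM feature construction underlying this approach, using hypothetical guest records (the study's actual variables and models are not reproduced): each guest is ranked on recency, frequency and monetary value, and a low combined score serves as a crude churn-risk proxy.

```python
from datetime import date

# Hypothetical repeat-guest records: (guest_id, last_stay, n_stays, total_spend)
guests = [
    ("g1", date(2023, 11, 20), 6, 4800.0),
    ("g2", date(2022, 3, 5), 2, 900.0),
    ("g3", date(2023, 8, 14), 4, 2600.0),
    ("g4", date(2021, 12, 1), 1, 300.0),
]
today = date(2024, 1, 1)

def rank_scores(values, higher_is_better=True):
    """Score each value 1..n by rank; n = most loyal behaviour (ties share a rank)."""
    order = sorted(values, reverse=not higher_is_better)
    return [order.index(v) + 1 for v in values]

recency = [(today - last).days for _, last, _, _ in guests]   # fewer days = better
r = rank_scores(recency, higher_is_better=False)
f = rank_scores([n for *_, n, _ in guests])                   # frequency
m = rank_scores([s for *_, s in guests])                      # monetary

# A simple churn-risk proxy: a low combined RFM score flags a likely churner.
rfm = {g[0]: r[i] + f[i] + m[i] for i, g in enumerate(guests)}
print(rfm)  # → {'g1': 12, 'g2': 6, 'g3': 9, 'g4': 3}
```

In the study, RFM variables like these (plus sector-specific ones such as market and season) feed the random forest and logistic regression models rather than being summed directly.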
Findings
The random forest algorithm showed a higher prediction performance with an 80% accuracy rate. The most important variables were RFM-based, followed by hotel sector-specific variables such as market, season, accompaniers and booker. Some managerial strategies were proposed to retain future churners, clustered as “hesitant,” “economy,” “alternative seeker,” and “opportunity chaser” customer groups.
Research limitations/implications
This study contributes to the theoretical understanding of customer behavior in the hospitality industry and provides valuable insight for hotel practitioners by demonstrating the methods that facilitate the identification of potential churners and their characteristics.
Originality/value
Most customer retention studies in hospitality either concentrate on the antecedents of retention or customers’ revisit intentions using traditional methods. Taking a unique place within the literature, this study conducts churn prediction analysis for repeat hotel customers by opening a new area for inquiry in hospitality studies.
Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto
Abstract
Purpose
The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.
Design/methodology/approach
The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and assess a within-the-group level of productivity of the retailers.
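The two-step idea (cluster retailers into homogeneous groups first, then measure productivity within each group) can be sketched as follows. The retailers, features and figures are hypothetical, and the within-group ratio-to-best score is a crude stand-in for a full DEA model.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    return tuple(sum(xs) / len(xs) for xs in zip(*points))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; returns a cluster index for every point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:                     # keep old center if cluster empties
                centers[j] = mean(members)
    return labels

# Hypothetical retailers: structural features (catchment population in
# thousands, store size in m2), then (staff input, sales output).
features = {"r1": (120.0, 800.0), "r2": (130.0, 900.0),
            "r3": (30.0, 200.0), "r4": (25.0, 180.0)}
in_out = {"r1": (10.0, 500.0), "r2": (12.0, 540.0),
          "r3": (4.0, 260.0), "r4": (5.0, 250.0)}

names = list(features)
labels = kmeans([features[n] for n in names], k=2)

# Within each homogeneous cluster, score productivity relative to the best
# performer (a ratio-based stand-in for the DEA step).
efficiency = {}
for j in set(labels):
    members = [n for n, lab in zip(names, labels) if lab == j]
    ratios = {n: in_out[n][1] / in_out[n][0] for n in members}
    best = max(ratios.values())
    efficiency.update({n: ratios[n] / best for n in members})
print(efficiency)
```

Comparing each retailer only against peers in the same cluster is exactly what reduces the heterogeneity problem noted in the Findings: a small store is no longer penalized for producing less than a flagship outlet.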
Findings
The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity by reducing subjective bias and dimensionality, which is embedded with the use of Big Data.
Practical implications
The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.
Originality/value
This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.
In this article, the author discusses works from the French Documentation Movement in the 1940s and 1950s with regard to how it formulates bibliographic classification systems as…
Abstract
Purpose
In this article, the author discusses works from the French Documentation Movement in the 1940s and 1950s with regard to how it formulates bibliographic classification systems as documents. Significant writings by Suzanne Briet, Éric de Grolier and Robert Pagès are analyzed in the light of current document-theoretical concepts and discussions.
Design/methodology/approach
Conceptual analysis.
Findings
The French Documentation Movement provided a rich intellectual environment in the late 1940s and early 1950s, resulting in original works on documents and the ways these may be represented bibliographically. These works display a variety of approaches, from object-oriented description to notational concept-synthesis, and definitions of classification systems as isomorphic documents at the center of a politically informed critique of modern society.
Originality/value
The article brings together historical and conceptual elements in the analysis which have not previously been combined in Library and Information Science literature. In the analysis, the article discusses significant contributions to classification and document theory that hitherto have eluded attention from the wider international Library and Information Science research community. Through this, the article contributes to the currently ongoing conceptual discussion on documents and documentality.
Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman
Abstract
Purpose
In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.
Design/methodology/approach
On a sample of over 230,000 records with close to 12,000 distinct DDC classes, an open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: lexical algorithm, support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted to investigate the value and inter-rater agreement of automatically assigned classes, on a sample of 60 records.
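Annif's own ensemble backend is not reproduced here, but the general idea of combining the four classifiers can be sketched as weighted soft voting over per-class scores; the DDC labels and scores below are invented for illustration.

```python
def ensemble(predictions, weights=None):
    """Average per-class scores from several classifiers (soft voting).

    predictions: list of {class_label: score} dicts, one per classifier.
    """
    weights = weights or [1.0] * len(predictions)
    total = sum(weights)
    combined = {}
    for pred, w in zip(predictions, weights):
        for label, score in pred.items():
            combined[label] = combined.get(label, 0.0) + w * score / total
    return combined

# Hypothetical scores for one record from three of the classifiers,
# over three-digit DDC classes.
lexical  = {"025": 0.6, "004": 0.3, "330": 0.1}
svc      = {"025": 0.5, "004": 0.4, "330": 0.1}
fasttext = {"025": 0.2, "004": 0.7, "330": 0.1}

combined = ensemble([lexical, svc, fasttext])
best = max(combined, key=combined.get)
print(best, round(combined[best], 3))  # → 004 0.467
```

The point of the ensemble, visible even in this toy case, is that the combined ranking can differ from any single classifier's top pick when the members disagree.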
Findings
The best results were achieved using the ensemble approach that achieved 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.
Originality/value
The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.
Elavaar Kuzhali S. and Pushpa M.K.
Abstract
Purpose
COVID-19 has occurred in more than 150 countries and has had a huge impact on the health of many people. COVID-19 needs to be detected at an early stage, and infected patients should be given special attention. The fastest way to detect COVID-19-infected patients is through radiology and radiography images. A few early studies describe particular abnormalities of infected patients in chest radiograms. Even though some challenges arise in identifying traces of viral infection in X-ray images, a convolutional neural network (CNN) can determine the patterns in the data that distinguish normal from infected X-rays, which increases the detection rate. Therefore, the researchers focus on developing a deep learning-based detection model.
Design/methodology/approach
The main intention of this proposal is to develop enhanced lung segmentation and classification for diagnosing COVID-19. The main processes of the proposed model are image pre-processing, lung segmentation and deep classification. Initially, image enhancement is performed by contrast enhancement and filtering approaches. Once the image is pre-processed, optimal lung segmentation is done by the adaptive fuzzy-based region growing (AFRG) technique, in which the constant function for fusion is optimized by the modified deer hunting optimization algorithm (M-DHOA). Further, a well-performing deep learning algorithm termed adaptive CNN (A-CNN) is adopted for the classification, in which the hidden neurons are tuned by the proposed DHOA to enhance the detection accuracy. The simulation results illustrate that the proposed model can enhance COVID-19 testing methods on publicly available data sets.
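Plain region growing, the base of the AFRG technique mentioned above (without the fuzzy adaptation or M-DHOA tuning), can be sketched as a breadth-first expansion from a seed pixel, admitting neighbours whose intensity stays within a tolerance of the running region mean. The toy image below is hypothetical.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity lies within `tol` of the running region mean."""
    h, w = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                mean = total / len(region)
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# Toy "X-ray": a bright 2x2 lung field (values ~200) on a dark background.
image = [
    [10, 12, 11, 10],
    [11, 200, 205, 12],
    [10, 198, 202, 11],
    [12, 10, 11, 10],
]
print(sorted(region_grow(image, seed=(1, 1), tol=30)))
# → [(1, 1), (1, 2), (2, 1), (2, 2)]
```

AFRG replaces the fixed tolerance with fuzzy, optimizer-tuned criteria, but the grow-from-seed mechanics are the same.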
Findings
From the experimental analysis, the accuracy of the proposed M-DHOA–CNN was 5.84%, 5.23%, 6.25% and 8.33% superior to recurrent neural network, neural networks, support vector machine and K-nearest neighbor, respectively. Thus, the segmentation and classification performance of the developed COVID-19 diagnosis by AFRG and A-CNN has outperformed the existing techniques.
Originality/value
This paper adopts the latest optimization algorithm called M-DHOA to improve the performance of lung segmentation and classification in COVID-19 diagnosis using adaptive K-means with region growing fusion and A-CNN. To the best of the authors’ knowledge, this is the first work that uses M-DHOA for improved segmentation and classification steps for increasing the convergence rate of diagnosis.
Research on artificial intelligence (AI) and its potential effects on the workplace is increasing. How AI and the futures of work are framed in traditional media has been examined…
Abstract
Purpose
Research on artificial intelligence (AI) and its potential effects on the workplace is increasing. How AI and the futures of work are framed in traditional media has been examined in prior studies, but current research has not gone far enough in examining how AI is framed on social media. This paper aims to fill this gap by examining how people frame the futures of work and intelligent machines when they post on social media.
Design/methodology/approach
We investigate public interpretations, assumptions and expectations, referring to framing expressed in social media conversations. We also coded the emotions and attitudes expressed in the text data. A corpus consisting of 998 unique Reddit post titles and their corresponding 16,611 comments was analyzed using computer-aided textual analysis comprising a BERTopic model and two BERT text classification models, one for emotion and the other for sentiment analysis, supported by human judgment.
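The BERT classifiers themselves are far too heavy to sketch here, so the stand-in below uses a toy lexicon scorer only to illustrate how per-comment sentiment labels can be aggregated into an overall attitude; the word lists and posts are invented.

```python
# Toy lexicon scorer standing in for the study's BERT sentiment classifier.
POSITIVE = {"curious", "exciting", "helpful", "opportunity"}
NEGATIVE = {"fear", "loss", "worried", "unemployment"}

def sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

comments = [
    "AI is an exciting opportunity for new kinds of work",
    "worried about job loss and unemployment",
    "machines will take over some tasks",
]
labels = [sentiment(c) for c in comments]
print(labels)  # → ['positive', 'negative', 'neutral']

# Overall attitude: mean of +1/0/-1 per comment, mirroring the idea of a
# corpus-level "slightly positive" attitude reading.
score = {"positive": 1, "neutral": 0, "negative": -1}
print(sum(score[lab] for lab in labels) / len(labels))  # → 0.0
```

A transformer classifier replaces the word lists with learned contextual representations, but the per-comment labeling and corpus-level aggregation steps are structurally the same.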
Findings
Different interpretations, assumptions and expectations were found in the conversations. Three subframes were analyzed in detail under the overarching frame of the New World of Work: (1) general impacts of intelligent machines on society, (2) undertaking of tasks (augmentation and substitution) and (3) loss of jobs. The general attitude observed in conversations was slightly positive, and the most common emotion category was curiosity.
Originality/value
Findings from this research can uncover public needs and expectations regarding the future of work with intelligent machines. The findings may also help shape research directions about futures of work. Furthermore, firms, organizations or industries may employ framing methods to analyze customers’ or workers’ responses or even influence the responses. Another contribution of this work is the application of framing theory to interpreting how people conceptualize the future of work with intelligent machines.