Search results

1–10 of over 12,000
Content available
Article
Publication date: 14 December 2020

Enna Hirata, Maria Lambrou and Daisuke Watanabe

Abstract

Purpose

This paper aims to retrieve key components of blockchain applications in supply chain areas. It applies natural language processing methods to generate useful insights from academic literature.

Design/methodology/approach

It first applies a text mining method to retrieve information from scientific journal papers on the related topics. The text information is then analyzed through machine learning (ML) models to identify the important implications from the existing literature.
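
As a concrete illustration, the following is a minimal sketch of the kind of text-mining and topic-extraction pipeline described above, assuming scikit-learn; the paper's actual models are not specified in this abstract, and the example texts are hypothetical stand-ins for full journal papers.

```python
# Hypothetical sketch: TF-IDF text mining plus topic extraction over paper texts.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

papers = [
    "blockchain improves traceability in food supply chains",
    "integrating internet of things devices with distributed ledgers in logistics",
    "smart contracts reduce documentation costs in container shipping",
]  # hypothetical stand-ins for full paper texts

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(papers)

# Factorize the term matrix into a small number of latent topics
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X)

terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```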

Findings

The research findings are threefold. First, while challenges remain a concern, the focus should be on the design and implementation of blockchain in the supply chain field. Second, integration with the internet of things is considered of higher importance. Third, blockchain plays a crucial role in food sustainability.

Research limitations/implications

The research findings offer insights for both policymakers and business managers on blockchain implementation in the supply chain.

Practical implications

This paper exemplifies a model situated at the interface of human-based and machine-learned analysis, potentially offering an interesting and relevant avenue for blockchain and supply chain management researchers.

Originality/value

To the best of the authors' knowledge, this research is the very first attempt to apply ML algorithms to analyzing the full contents of blockchain-related research in the supply chain sector, thereby providing new insights and complementing the existing literature.

Details

Maritime Business Review, vol. 6 no. 2
Type: Research Article
ISSN: 2397-3757

Content available
Article
Publication date: 19 August 2022

Enrico D'agostini

Abstract

Purpose

This study explores the levels of Facebook engagement of the two largest Europe-based shipping lines, Maersk and Mediterranean Shipping Company (MSC), to discover the marketing orientation of the topics advertised and to ascertain whether they tend to be about brand recognition, new transport services, or value propositions for stakeholders.

Design/methodology/approach

The Facebook posts of Maersk and MSC were analysed using social media text mining and social network analysis (SNA); in- and out-degree centrality analysis was performed to determine the key terms in their posts. NetMiner software was used to collect the respective data on Maersk and MSC. The inquiry period was set between May 2020 and February 2021.
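
A minimal sketch of the degree-centrality step described above, assuming networkx in place of the NetMiner software actually used; the posts and tokens are hypothetical.

```python
# Hypothetical sketch: rank key terms in posts by in-degree centrality.
import networkx as nx

posts = [
    ["container", "vessel", "delivery"],
    ["logistics", "supply", "chain", "container"],
    ["vessel", "container", "schedule"],
]  # hypothetical tokenized Facebook posts

G = nx.DiGraph()
for tokens in posts:
    for a, b in zip(tokens, tokens[1:]):  # link each word to its successor
        G.add_edge(a, b)

# In-degree centrality surfaces the terms that other terms point to most often
ranked = sorted(nx.in_degree_centrality(G).items(), key=lambda kv: -kv[1])
print(ranked[:3])
```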

Findings

The results indicated a divergence in their post contents, with higher engagement and a wider, more active follower base for MSC than for Maersk. Maersk primarily posts about logistics services and supply chain solutions. MSC communicates about new and large container vessels. Both companies seek greater brand recognition and information sharing through social media.

Originality/value

These results can be used by stakeholders to evaluate whether Maersk and MSC truly deliver on the respective value propositions they communicate online through their social media engagement. The results can also help Maersk and MSC gauge the effectiveness of their communication with stakeholders and modify their digital engagement strategies accordingly.

Details

Maritime Business Review, vol. 8 no. 3
Type: Research Article
ISSN: 2397-3757

Open Access
Book part
Publication date: 4 May 2018

Nurlaila, Syahron Lubis, Tengku Sylvana Sinar and Muhizar Muchtar

Abstract

Purpose – This paper aims to describe the semantic equivalence of cultural terms in meurukon texts translated from Acehnese into Indonesian. A qualitative descriptive approach is used to analyze the contexts of semantic equivalence in these texts: varied semantic structures, especially those caused by the cultural gap between the two languages.

Design/Methodology/Approach – This research is of a qualitative descriptive nature, wherein data are documented and analyzed using the methods proposed by Miles, Huberman, and Saldana (2014): data condensation, data display, and drawing and verifying conclusions. The researcher is considered the key instrument in the whole process. The data source is the meurukon texts and their translation, consisting of 623 sentences; these mainly comprise words and phrases that exhibit semantic equivalence of cultural terms.

Findings – The results show that 129 cultural terms were found in the 623 sentences. Of the analyzed data, only 16.66% is not equivalent with the target text, while 83.34% of the words and phrases of the meurukon text are equivalent. This suggests that, as a result of translation, the meurukon text has high semantic and lexical equivalence with the target text.

Research Limitations/Implications – This research is focused on the semantic equivalence found in meurukon texts. The semantic equivalence here pertains only to the lexical meaning of nouns and adjectives, established through componential analysis.

Practical Implications – The result can serve as an exemplar for analyzing the semantic equivalence of cultural terms in meurukon texts translated from Acehnese into Indonesian using componential analysis.

Originality/Value – This research identifies meurukon as an oral tradition of Acehnese culture, in a question-and-answer format, about Islamic law in Aceh, specifically North Aceh.

Details

Proceedings of MICoMS 2017
Type: Book

Content available
Article
Publication date: 25 October 2021

Enna Hirata and Takuma Matsuda

Abstract

Purpose

This research aims to uncover coronavirus disease 2019’s (COVID-19's) impact on shipping and logistics using Internet articles as the source.

Design/methodology/approach

This research applies web mining to collect information on COVID-19's impact on shipping and logistics from Internet articles. The information extracted is then analyzed through machine learning algorithms for useful insights.
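
A minimal sketch of the web-mining step, assuming requests and BeautifulSoup; the URL and keyword list are hypothetical placeholders, not taken from the study.

```python
# Hypothetical sketch: collect an article from the web and count topic keywords.
from collections import Counter

import requests
from bs4 import BeautifulSoup

url = "https://example.com/covid-shipping-article"  # placeholder, not a study source
html = requests.get(url, timeout=10).text
text = BeautifulSoup(html, "html.parser").get_text(separator=" ").lower()

keywords = {"port", "container", "delay", "blockchain", "resilience"}
counts = Counter(word for word in text.split() if word in keywords)
print(counts.most_common())
```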

Findings

The research results indicate that the recovery of the supply chain in China could potentially drive the global supply chain's return to normalcy. In addition, researchers and policymakers should prioritize two aspects: (1) Ease of cross-border trade and logistics. Digitalization of the supply chain and the application of breakthrough technologies such as blockchain and IoT are needed more than ever before. (2) Supply chain resilience. The high dependency of the global supply chain on China sounds an alarm for supply chain resilience. It calls for a framework for increasing global supply chain resilience that enables quick recovery from disruptions in the long term.

Originality/value

Differing from other studies taking the natural language processing (NLP) approach, this research uses Internet articles as the data source. The findings reveal significant components of COVID-19's impact on shipping and logistics, highlighting crucial agendas for scholars to research.

Details

Maritime Business Review, vol. 7 no. 4
Type: Research Article
ISSN: 2397-3757

Open Access
Article
Publication date: 15 December 2022

Frida Nyqvist and Eva-Lena Lundgren-Henriksson

Abstract

Purpose

The purpose of this research is to explore how an industry is represented in multimodal public media narratives and how this representation subsequently affects the formation of public sense-giving space during a persisting crisis, such as a pandemic. The question asked is: how does the use of multimodality by public service media dynamically shape representations of industry identity during a persisting crisis?

Design/methodology/approach

This study used a multimodal approach. The verbal and visual media texts on the restaurant industry during the COVID-19 pandemic that were published in Finland by the public service media distributor Yle were studied. Data published between March 2020 and March 2022 were analysed. The data consisted of 236 verbal texts, including 263 visuals.

Findings

Three narratives were identified – victim, servant and survivor – that construct power relations and depict the identity of the restaurant industry differently. It was argued that multimodal media narratives hold three meaning-making functions: sentimentalizing, juxtaposing and nuancing industry characteristics. It was also argued that multimodal public service media narratives have wider implications, possibly shaping the future attractiveness of the industry and organizational members' understanding of their identity.

Originality/value

This research contributes to the sensemaking literature by exploring the role of power, explicitly or implicitly constructed through media narratives during a crisis. Furthermore, it shows how narratives take shape multimodally during a continuous crisis and how this impacts the construction of industry identity.

Details

Journal of Organizational Change Management, vol. 36 no. 8
Type: Research Article
ISSN: 0953-4814

Open Access
Article
Publication date: 5 December 2022

Carolin Ischen, Theo B. Araujo, Hilde A.M. Voorveld, Guda Van Noort and Edith G. Smit

Abstract

Purpose

Virtual assistants are increasingly used for persuasive purposes, employing the different modalities of voice and text (or a combination of the two). In this study, the authors compare the persuasiveness of voice- and text-based virtual assistants. The authors argue for perceived human-likeness and cognitive load as underlying mechanisms that can explain why voice- and text-based assistants differ in their persuasive potential by suppressing the activation of consumers' persuasion knowledge.

Design/methodology/approach

A pre-registered online experiment (n = 450) implemented a text-based virtual assistant and two voice-based ones (with and without the interaction history displayed as text).

Findings

Findings show that, contrary to expectations, a text-based assistant is perceived as more human-like compared to a voice-based assistant (regardless of whether the interaction history is displayed), which in turn positively influences brand attitudes and purchase intention. The authors also find that voice as a communication modality can increase persuasion knowledge by being cognitively more demanding in comparison to text.

Practical implications

Simply using voice as a presumably human cue might not suffice to give virtual assistants a human-like appeal. For the development of virtual assistants, it might be beneficial to actively engage consumers to increase awareness of persuasion.

Originality/value

The current study adds to the emergent research stream considering virtual assistants in explicitly exploring modality differences between voice and text (and a combination of the two) and provides insights into the effects of persuasion coming from virtual assistants.

Details

Internet Research, vol. 32 no. 7
Type: Research Article
ISSN: 1066-2243

Open Access
Article
Publication date: 20 February 2024

Alenka Kavčič Čolić and Andreja Hari

Abstract

Purpose

The current predominant delivery format resulting from digitization is PDF, which is not appropriate for the blind, partially sighted and people who read on mobile devices. To meet the needs of both communities, as well as broader ones, alternative file formats are required. With the findings of the eBooks-On-Demand-Network Opening Publications for European Netizens project research, this study aims to improve access to digitized content for these communities.

Design/methodology/approach

In 2022, the authors conducted research on the digitization experiences of 13 EODOPEN partners at their organizations. The authors distributed the same sample of English-language scans with different characteristics and, in accordance with the Web Content Accessibility Guidelines, created 24 criteria to analyze the partners' digitization workflows, output formats and optical character recognition (OCR) quality.
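
One common way to quantify OCR quality is the character error rate (CER): the edit distance between the OCR output and the ground truth, normalized by the length of the ground truth. The abstract does not state which metric the project used, so the plain-Python sketch below is illustrative only.

```python
# Sketch: character error rate (CER) between OCR output and ground truth.
def cer(ocr: str, truth: str) -> float:
    m, n = len(ocr), len(truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deletions
    for j in range(n + 1):
        d[0][j] = j  # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ocr[i - 1] == truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # substitute
    return d[m][n] / max(n, 1)

print(cer("0CR sampIe", "OCR sample"))  # two substitutions over 10 chars -> 0.2
```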

Findings

In this contribution, the authors present the results of a trial implementation among EODOPEN partners regarding their digitization workflows, the delivery file formats used and the resulting OCR quality, depending on the type of digitization output file format. It was shown that partners using the OCR tool ABBYY FineReader Professional and producing scanning outputs in tagged PDF and PDF/UA formats achieved better results according to the set criteria.

Research limitations/implications

The trial implementations were limited to 13 project partners’ organizations only.

Originality/value

This research paper can be a valuable contribution to the field of massive digitization practices, particularly in terms of improving the accessibility of the output delivery file formats.

Details

Digital Library Perspectives, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5816

Open Access
Article
Publication date: 5 March 2021

Xuan Ji, Jiachen Wang and Zhijun Yan

Abstract

Purpose

Stock price prediction is a hot topic, and traditional prediction methods are usually based on statistical and econometric models. However, these models struggle to deal with nonstationary time series data. With the rapid development of the internet and the increasing popularity of social media, online news and comments often reflect investors' emotions and attitudes toward stocks, and they contain a lot of important information for predicting stock prices. This paper aims to develop a stock price prediction method that takes full advantage of social media data.

Design/methodology/approach

This study proposes a new prediction method based on deep learning technology that integrates traditional stock financial index variables and social media text features as inputs of the prediction model. The study uses Doc2Vec to build long text feature vectors from social media and then reduces the dimensions of the text feature vectors with a stacked auto-encoder to balance the dimensions between the text feature variables and the stock financial index variables. Meanwhile, based on the wavelet transform, the time series data of the stock price are decomposed to eliminate the random noise caused by stock market fluctuation. Finally, the study uses a long short-term memory model to predict the stock price.
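
A condensed sketch of the described pipeline, assuming gensim, PyWavelets and Keras; the stacked auto-encoder step is omitted for brevity, and all dimensions, hyperparameters and data are illustrative only, not those of the study.

```python
# Illustrative sketch: Doc2Vec text features + wavelet denoising + LSTM prediction.
import numpy as np
import pywt
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

# 1) Doc2Vec turns each period's social media text into a dense feature vector
docs = [TaggedDocument(words=t.split(), tags=[i])
        for i, t in enumerate(["good earnings outlook ahead",
                               "heavy sell pressure on the stock"])]
d2v = Doc2Vec(docs, vector_size=16, min_count=1, epochs=50)
text_vec = d2v.infer_vector(docs[0].words)  # one illustrative text vector

# 2) The wavelet transform denoises the raw price series
prices = np.array([10.0, 10.2, 9.9, 10.4, 10.1, 10.6, 10.3, 10.8])
coeffs = pywt.wavedec(prices, "db1", level=1)
coeffs[1:] = [pywt.threshold(c, value=0.1, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db1")

# 3) An LSTM maps windows of (denoised price + text features) to the next price;
#    one text vector is reused per window here purely for brevity
window = 2
X = np.array([np.column_stack([smooth[i:i + window],
                               np.tile(text_vec, (window, 1))])
              for i in range(len(smooth) - window)])
y = smooth[window:]

model = Sequential([LSTM(8, input_shape=(window, 1 + 16)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
```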

Findings

The experimental results show that the method performs better than all three benchmark models on all evaluation indicators and can effectively predict the stock price.

Originality/value

This paper proposes a new stock price prediction model based on deep learning technology that incorporates traditional financial features and text features derived from social media.

Details

International Journal of Crowd Science, vol. 5 no. 1
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 8 December 2020

Matjaž Kragelj and Mirjana Kljajić Borštnar

Abstract

Purpose

The purpose of this study is to develop a model for automated classification of old digitised texts to the Universal Decimal Classification (UDC), using machine-learning methods.

Design/methodology/approach

The general research approach is inherent to design science research, in which the problem of UDC assignment of the old, digitised texts is addressed by developing a machine-learning classification model. A corpus of 70,000 scholarly texts, fully bibliographically processed by librarians, was used to train and test the model, which was used for classification of old texts on a corpus of 200,000 items. Human experts evaluated the performance of the model.
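
A minimal sketch of such a text-classification step, assuming scikit-learn; the study's actual model, features and training corpus are not specified in this abstract, and the UDC labels shown are merely examples.

```python
# Hypothetical sketch: supervised classification of digitised texts into UDC classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "treatise on the cultivation of wheat and barley",
    "observations of planetary motion and celestial mechanics",
    "principles of civil law and judicial procedure",
    "experiments on plant hybridisation in the garden",
]  # hypothetical digitised scholarly texts
labels = ["63", "52", "34", "63"]  # example UDC classes: 63 agriculture, 52 astronomy, 34 law

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["on the motion of comets"]))  # likely ['52']
```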

Findings

Results suggest that machine-learning models can correctly assign the UDC at some level for almost any scholarly text. Furthermore, the model can be recommended for the UDC assignment of older texts. Ten librarians corroborated this on 150 randomly selected texts.

Research limitations/implications

The main limitations of this study were the unavailability of labelled older texts and the limited availability of librarians.

Practical implications

The classification model can provide a recommendation to the librarians during their classification work; furthermore, it can be implemented as an add-on to full-text search in the library databases.

Social implications

The proposed methodology supports librarians by recommending UDC classifiers, thus saving time in their daily work. By automatically classifying older texts, digital libraries can provide a better user experience by enabling structured searches. Both contribute to making knowledge more widely available and usable.

Originality/value

These findings contribute to the field of automated classification of bibliographical information with the usage of full texts, especially in cases in which the texts are old, unstructured and in which archaic language and vocabulary are used.

Details

Journal of Documentation, vol. 77 no. 3
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 31 July 2023

Sara Lafia, David A. Bleckley and J. Trent Alexander

Abstract

Purpose

Many libraries and archives maintain collections of research documents, such as administrative records, with paper-based formats that limit the documents' access to in-person use. Digitization transforms paper-based collections into more accessible and analyzable formats. As collections are digitized, there is an opportunity to incorporate deep learning techniques, such as Document Image Analysis (DIA), into workflows to increase the usability of information extracted from archival documents. This paper describes the authors' approach using digital scanning, optical character recognition (OCR) and deep learning to create a digital archive of administrative records related to the mortgage guarantee program of the Servicemen's Readjustment Act of 1944, also known as the G.I. Bill.

Design/methodology/approach

The authors used a collection of 25,744 semi-structured paper-based records from the administration of G.I. Bill Mortgages from 1946 to 1954 to develop a digitization and processing workflow. These records include the name and city of the mortgagor, the amount of the mortgage, the location of the Reconstruction Finance Corporation agent, one or more identification numbers and the name and location of the bank handling the loan. The authors extracted structured information from these scanned historical records in order to create a tabular data file and link them to other authoritative individual-level data sources.

Findings

The authors compared the flexible character accuracy of five OCR methods. The authors then compared the character error rate (CER) of three text extraction approaches (regular expressions, DIA and named entity recognition (NER)). The authors were able to obtain the highest quality structured text output using DIA with the Layout Parser toolkit by post-processing with regular expressions. Through this project, the authors demonstrate how DIA can improve the digitization of administrative records to automatically produce a structured data resource for researchers and the public.
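
A minimal sketch of the regular-expression post-processing step applied to a line of OCR output; the field names and patterns are hypothetical, not the project's actual rules.

```python
# Hypothetical sketch: regex extraction of structured fields from OCR'd record text.
import re

ocr_line = "JOHN A SMITH, DETROIT MICH  $7,500.00  RFC AGENT: CLEVELAND OHIO"

patterns = {
    "mortgagor": r"^([A-Z][A-Z .]+?),",        # name up to the first comma
    "city": r",\s*([A-Z][A-Z ]+?)\s{2,}",      # city/state after the comma
    "amount": r"\$([\d,]+\.\d{2})",            # dollar amount of the mortgage
}
record = {field: (m.group(1).strip() if (m := re.search(p, ocr_line)) else None)
          for field, p in patterns.items()}
print(record)
# {'mortgagor': 'JOHN A SMITH', 'city': 'DETROIT MICH', 'amount': '7,500.00'}
```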

Originality/value

The authors' workflow is readily transferable to other archival digitization projects. Through the use of digital scanning, OCR and DIA processes, the authors created the first digital microdata file of administrative records related to the G.I. Bill mortgage guarantee program available to researchers and the general public. These records offer research insights into the lives of veterans who benefited from loans, the impacts on the communities built by the loans and the institutions that implemented them.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418
