Martin Nečaský, Petr Škoda, David Bernhauer, Jakub Klímek and Tomáš Skopal
Abstract
Purpose
Semantic retrieval and discovery of datasets published as open data remains a challenging task. The datasets inherently originate in the globally distributed web jungle, lacking the luxury of centralized database administration, database schemes, shared attributes, vocabulary, structure and semantics. The existing dataset catalogs provide basic search functionality relying on keyword search in brief, incomplete or misleading textual metadata attached to the datasets. The search results are thus often insufficient. However, there exist many ways of improving the dataset discovery by employing content-based retrieval, machine learning tools, third-party (external) knowledge bases, countless feature extraction methods and description models and so forth.
Design/methodology/approach
In this paper, the authors propose a modular framework for rapid experimentation with methods for similarity-based dataset discovery. The framework consists of an extensible catalog of components prepared to form custom pipelines for dataset representation and discovery.
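As an illustration of the kind of component catalog such a framework could expose, here is a minimal sketch in Python; all names (`CATALOG`, `component`, `run_pipeline`) are hypothetical and not taken from the authors' GitHub implementation:

```python
# Minimal sketch of a component catalog from which dataset-discovery
# pipelines are assembled. The catalog holds both representation
# components and similarity components. All names are illustrative.
from typing import Callable, Dict, List

CATALOG: Dict[str, Callable] = {}

def component(name: str):
    """Register a reusable pipeline component in the catalog."""
    def register(fn: Callable) -> Callable:
        CATALOG[name] = fn
        return fn
    return register

@component("tokenize")
def tokenize(dataset: dict) -> dict:
    """Representation component: derive a token set from metadata."""
    dataset["tokens"] = dataset["description"].lower().split()
    return dataset

@component("jaccard")
def jaccard(query: dict, candidate: dict) -> float:
    """Similarity component: Jaccard overlap of token sets."""
    a, b = set(query["tokens"]), set(candidate["tokens"])
    return len(a & b) / len(a | b) if a | b else 0.0

def run_pipeline(steps: List[str], dataset: dict) -> dict:
    """Apply a sequence of representation components to one dataset."""
    for step in steps:
        dataset = CATALOG[step](dataset)
    return dataset
```

A custom pipeline is then just a list of catalog names, which is what makes side-by-side experimentation with different representations cheap.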
Findings
The study presents several proof-of-concept pipelines, including an experimental evaluation, which showcase the usage of the framework.
Originality/value
To the best of the authors’ knowledge, there is no similar formal framework for experimentation with various similarity methods in the context of dataset discovery. The framework has the ambition to establish a platform for reproducible and comparable research in the area of dataset discovery. The prototype implementation of the framework is available on GitHub.
Olanrewaju Ayobami Omoya, Kassandra A. Papadopoulou and Eric Lou
Abstract
Purpose
The purpose of this paper is to investigate the application of reliability engineering to oil and gas (O&G) pipeline systems with the aim of identifying means through which reliability engineering can be used to improve pipeline integrity, specifically with regard to man-made incidents (e.g. material/weld/equipment failure, corrosion, incorrect operation and excavation damages).
Design/methodology/approach
A literature review was carried out on the application of reliability tools to O&G pipeline systems and four case studies are presented as examples of how reliability engineering can help to improve pipeline integrity. The scope of the paper is narrowed to four stages of the pipeline life cycle; the decommissioning stage is not part of this research. A survey was also carried out using a questionnaire to check the level of application of reliability tools in the O&G industry.
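As a hedged illustration of one elementary reliability tool that such a review covers, a pipeline treated as a series system of segments with constant (exponential) failure rates can be modelled as follows; the failure rates used below are invented, not drawn from the case studies:

```python
# Illustrative series-system reliability for a pipeline of segments,
# assuming constant (exponential) failure rates. Rates are invented.
import math

def segment_reliability(failure_rate: float, t: float) -> float:
    """R_i(t) = exp(-lambda_i * t) for one segment."""
    return math.exp(-failure_rate * t)

def series_reliability(failure_rates, t: float) -> float:
    """A pipeline fails if any segment fails: R(t) = prod of R_i(t)."""
    r = 1.0
    for lam in failure_rates:
        r *= segment_reliability(lam, t)
    return r
```

Under this model, adding segments can only lower system reliability, which is one reason design-for-reliability arguments focus on the weakest segments first.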
Findings
Data from the survey and the literature show that a reliability-centred approach can be applied and will improve pipeline reliability where applied. However, there are several hindrances to the effective application of reliability tools: the current methods are time-based and focus mainly on design against failure rather than design for reliability.
Research limitations/implications
The tools identified do not cover the decommissioning of the pipeline system. Research validation sample size can be broadened to include more pipeline stakeholders/professionals. Pipeline integrity management systems are proprietary information and permission is required from stakeholders to do a detailed practical study.
Originality/value
This paper proposes the minimum applied reliability tools for application during the design, operation and maintenance phases targeted at the O&G industry. Critically, this paper provides a case for an integrated approach to applying reliability and maintenance tools that are required to reduce pipeline failure incidents in the O&G industry.
Cris Koutsougeras, Mohammad Saadeh and Ahmad Fayed
Abstract
Purpose
This modeling facilitates the determination of control responses (or possible reconfiguration) upon node halt/failure events and the identification of which segments of the pipeline can continue to function uninterrupted. Based on this modeling, an algorithm is presented to implement the control responses and to establish this determination. In this work, the authors propose using Message Queuing Telemetry Transport (MQTT), which is an integrated method to perform the system-wide control based on message exchanging among local node controllers (agents) and the global controller (broker).
Design/methodology/approach
Complex manufacturing lines in industrial plants are designed to accomplish an overall task in an incremental mode. This typically consists of a sequence of smaller tasks organized as cascaded processing nodes with local controls, which must be coordinated and aided by a system-wide (global) controller. This work presents a logic modeling technique for such pipelines and a method for using its logic to determine the consequent effects of events where a node halts/fails on the overall operation.
Findings
The method uses a protocol for communicating node events, and the algorithm determines the consequences of those events in order to produce global control directives, which are communicated back to node controllers over MQTT. The algorithm is simulated on a complex manufacturing line with arbitrary events to illustrate the sequence of events and the agent–broker message exchange.
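The agent–broker exchange described above can be sketched with a minimal in-memory publish/subscribe stand-in; a real deployment would use an actual MQTT broker (e.g. Mosquitto via the paho-mqtt client), and the topic names and the halt-propagation rule below are assumptions, not the authors' protocol:

```python
# Minimal in-memory sketch of an agents-broker message exchange.
# Topic names and the downstream halt-propagation rule are invented.
from collections import defaultdict

class Broker:
    """Stand-in for the MQTT broker: routes messages by topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subscribers[topic]:
            cb(payload)

class NodeAgent:
    """Local controller for one processing node in the line."""
    def __init__(self, name, broker):
        self.name, self.broker, self.halted = name, broker, False
        broker.subscribe(f"control/{name}", self.on_control)

    def report_failure(self):
        """Announce a halt event on the shared events topic."""
        self.broker.publish("events", {"node": self.name, "status": "halt"})

    def on_control(self, payload):
        self.halted = payload["directive"] == "halt"

def global_control(broker, downstream_of):
    """Global controller: on a node halt, halt its downstream nodes."""
    def on_event(payload):
        for node in downstream_of.get(payload["node"], []):
            broker.publish(f"control/{node}", {"directive": "halt"})
    broker.subscribe("events", on_event)
```

The point of the pattern is that node agents never address each other directly; all coordination flows through topics, which is what MQTT provides off the shelf.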
Originality/value
The use of MQTT is a relatively new concept in cyber-physical systems. The feed-forward example itself is not new; it was chosen here for illustration purposes. Future work will consider practical examples that are at the core of manufacturing processes.
Oswald A. J. Mascarenhas, S.J.
Abstract
Executive Summary
“The unexamined life is not worth living” (Socrates). That is, without critically inquiring into the knowledge of life which is well-being and valuable, life is not worth living. Critical thinking questions existing theories and their unexamined and obsessive assumptions and generalizations, constraints, and “best” practices of the prevailing system of management and tries to replace them with more valid assumptions and generalizations that uphold the dignity, uniqueness, and inalienable rights of the individual person and the community. Better outcomes result from asking the right questions than from having the right answers. In the diverse, pluralist cultural environment of today, the promise of a truly generative dialog among Occidental (Western) and Oriental (Eastern) cultures and civilizations holds great hope for the future. Critical thinking (CT) is an “inclusive” thinking system that can facilitate this dialog such that all of us have a meaningful space and place in this universe. After defining CT and arguing its importance for executives, this chapter introduces CT in two parts: Part 1: Various Approaches to Critical Thinking; Part 2: Major Theories of Critical Thinking. Several contemporary business cases will be invoked to illustrate the need, nature, and scope of corporate CT.
Abstract
Purpose
To elaborate the picture of credibility assessment by examining how participants of online discussion evaluate the informational credibility of conspiracy theories.
Design/methodology/approach
Descriptive quantitative analysis and qualitative content analysis of 2,663 posts submitted to seven Reddit threads discussing a conspiracy operation, that is, the damage to the Nord Stream gas pipelines in September 2022. The study examined how the participants of online discussion assess the credibility of information constitutive of conspiracy theories speculating about (1) the suspected actors responsible for the damage, (2) their motives and (3) the ways in which the damage was done. The credibility assessments focussed on diverse sources offering information about the above three factors.
Findings
The participants assessed the credibility of information by drawing on four main criteria: plausibility of arguments, honesty in argumentation, similarity to one's beliefs and provision of evidence. Most assessments were negative and indicated doubt about the informational believability of conspiracy theories about the damage. Of the information sources referred to in the discussion, the posts submitted by fellow participants, television programmes and statements provided by governmental organizations were judged most critically, due to implausible argumentation and advocacy of biased views.
Research limitations/implications
As the study focuses on a sample of posts dealing with conspiracy theories about a particular event, the findings cannot be generalized to concern the informational credibility of conspiracy narratives at large.
Originality/value
The study breaks new ground by providing an in-depth analysis of the nature of credibility assessments, focussing on information constitutive of conspiracy theories.
Jochen Wirtz, Kevin Kam Fung So, Makarand Amrish Mody, Stephanie Q. Liu and HaeEun Helen Chun
Abstract
Purpose
The purpose of this paper is to examine peer-to-peer sharing platform business models, their sources of competitive advantage, and the roles, motivations and behaviors of key actors in their ecosystems.
Design/methodology/approach
This paper uses a conceptual approach that is rooted in the service, tourism and hospitality, and strategy literature.
Findings
First, this paper defines key types of platform business models in the sharing economy and describes their characteristics. In particular, the authors propose the differentiation between sharing platforms of capacity-constrained vs capacity-unconstrained assets and advance five core properties of the former. Second, the authors contrast platform business models with their pipeline business model counterparts to understand the fundamental differences between them. One important conclusion is that platforms cater to vastly more heterogeneous assets and consumer needs and, therefore, require liquidity and analytics for high-quality matching. Third, the authors examine the competitive position of platforms and conclude that the widely held “winner takes it all” assumption is not valid. Primary network effects are less important once a critical level of liquidity has been reached and may even turn negative if increased listings raise friction in the form of search costs. Once a critical level of liquidity has been reached, a platform’s competitive position depends on stakeholder trust and on service provider and user loyalty. Fourth, the authors integrate and synthesize the literature on the key stakeholders of platform businesses (i.e. users, service providers and regulators) and their roles and motivations. Finally, directions for further research are advanced.
Practical implications
This paper helps platform owners, service providers and users understand better the implications of sharing platform business models and how to position themselves in such ecosystems.
Originality/value
This paper integrates the extant literature on sharing platforms, takes a novel approach in delineating their key properties and dimensions, and provides insights into the evolving and dynamic forms of sharing platforms including converging business models.
Shreyesh Doppalapudi, Tingyan Wang and Robin Qiu
Abstract
Purpose
Clinical notes typically contain medical jargon and specialized words and phrases that are complicated and technical to most people, which is one of the most challenging obstacles to health information dissemination from healthcare providers to consumers. The authors aim to investigate how to leverage machine learning techniques to transform clinical notes of interest into understandable expressions.
Design/methodology/approach
The authors propose a natural language processing pipeline that is capable of extracting relevant information from long unstructured clinical notes and simplifying lexicons by replacing medical jargon and technical terms. In particular, the authors develop an unsupervised keyword matching method to extract relevant information from clinical notes. To automatically evaluate the completeness of the extracted information, the authors perform a multi-label classification task on the relevant texts. To simplify lexicons in the relevant text, the authors identify complex words using a sequence labeler and leverage transformer models to generate candidate words for substitution. The authors validate the proposed pipeline using 58,167 discharge summaries from critical care services.
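As a hedged sketch of the unsupervised keyword-matching extraction step only (the keyword set and threshold are invented; the classification, sequence-labeling and transformer components are not reproduced here):

```python
# Sketch of an unsupervised keyword-matching extraction step: keep the
# sentences of a clinical note that overlap a topic keyword set. The
# keyword list and threshold are invented for illustration.
def extract_relevant(note: str, keywords: set, threshold: int = 1) -> list:
    """Return sentences containing at least `threshold` keywords."""
    relevant = []
    for sentence in note.split("."):
        tokens = set(sentence.lower().split())
        if len(tokens & keywords) >= threshold:
            relevant.append(sentence.strip())
    return relevant
```

In a full pipeline, the retained sentences would then be fed to the multi-label classifier for completeness checking and to the lexical-simplification stage.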
Findings
The results show that the proposed pipeline can identify relevant information with high completeness and simplify complex expressions in clinical notes so that the converted notes have a high level of readability but a low degree of meaning change.
Social implications
The proposed pipeline can help healthcare consumers better understand their medical information and thereby strengthen communication between healthcare providers and consumers for better care.
Originality/value
An innovative pipeline approach is developed to address the health literacy problem confronted by healthcare providers and consumers in the ongoing digital transformation process in the healthcare industry.
Abstract
The Black Sea region has become an important energy transit route for Caspian and Russian oil and natural gas to western markets. Since 1996 the quantity of oil exported from the Black Sea through the Turkish Straits and the number of transiting tankers have doubled and will continue to grow. However, these are also two waterways where the risk of either an accidental or intentional disaster is significant, bringing serious repercussions for energy supply security. This paper analyzes measures taken by Black Sea coastal States to secure ports and shipping against accidental and intentional disasters. The paper examines the role of technology, such as satellite-based VTS providers in the Black Sea, the implementation of the ISPS Code, and the role of the relatively new BlackSeaFor in providing both port and navigational security. The paper further makes recommendations for enhancing security and emergency response planning. In addition, the paper examines current security measures taken by the Turkish Administration for oil transportation through the Turkish Straits.
Luca Rampini and Fulvio Re Cecconi
Abstract
Purpose
This study aims to introduce a new methodology for generating synthetic images for facility management purposes. The method starts by leveraging existing 3D open-source BIM models and using them inside a graphic engine to produce a photorealistic representation of indoor spaces enriched with facility-related objects. The virtual environment creates several images by changing lighting conditions, camera poses or materials. Moreover, the created images are labeled and ready to be used for model training.
Design/methodology/approach
This paper focuses on the challenges characterizing object detection models to enrich digital twins with facility management-related information. The automatic detection of small objects, such as sockets, power plugs, etc., requires large, labeled data sets that are costly and time-consuming to create. This study proposes a solution based on existing 3D BIM models to produce quick and automatically labeled synthetic images.
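The automatic-labeling idea can be sketched schematically as follows; `render` is a placeholder for the graphic-engine call and all parameter names are assumptions, not the authors' implementation:

```python
# Schematic sketch of generating automatically labeled synthetic samples
# by varying scene parameters. `render` is a placeholder for the
# photorealistic graphic-engine call; all names are illustrative.
from itertools import product

def render(scene, lighting, camera_pose, material):
    """Placeholder for the engine render call; returns image metadata."""
    return {"scene": scene, "lighting": lighting,
            "pose": camera_pose, "material": material}

def generate_labeled_set(scene, objects, lightings, poses, materials):
    """Render every parameter combination. Each image carries its labels
    for free, since the 3D model already knows every object's class and
    position, which is what removes the manual annotation cost."""
    samples = []
    for lighting, pose, material in product(lightings, poses, materials):
        image = render(scene, lighting, pose, material)
        labels = [{"class": o["class"], "bbox": o["bbox"]} for o in objects]
        samples.append((image, labels))
    return samples
```

The combinatorial loop is the key design choice: a handful of lighting, pose and material values multiplies into a sizeable labeled data set from a single BIM scene.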
Findings
The paper presents a conceptual model for creating synthetic images to increase the performance in training object detection models for facility management. The results show that virtually generated images, rather than an alternative to real images, are a powerful tool for integrating existing data sets. In other words, while a base of real images is still needed, introducing synthetic images helps augment the model’s performance and robustness in covering different types of objects.
Originality/value
This study introduces the first pipeline for creating synthetic images for facility management. Moreover, this paper validates the pipeline through a case study in which the performance of object detection models trained on real data alone is compared with that of models trained on a combination of real and synthetic images.
Stefan Jooss, Julia Lenz and Ralf Burbach
Abstract
Purpose
This paper aims to unpack how small and medium-sized enterprises (SMEs) can operationalise coopetition in talent management, addressing ongoing talent shortages in the hospitality industry which were intensified during the Covid-19 pandemic.
Design/methodology/approach
This conceptual paper draws from literature on coopetition and talent management in SMEs. Specifically, the authors take an interorganisational talent pool lens and develop a framework following the principles of open-systems theory.
Findings
The authors find that the traditional use of talent pools is often impractical for SMEs because of a lack of resources and capabilities. Instead, interorganisational talent pools, created through coopetition in talent management, can help these firms address talent shortages. The authors identify potential for SME coopetition at various stages, including the attraction, development and retention of talent.
Practical implications
Coopetition in talent management can aid industries in establishing market-thickening pipelines. Through co-attracting, co-developing and co-retaining talent, SMEs can create interorganisational talent pools. To develop talent management coopetition, a set of prerequisites, catalysts and potential inhibitors must be analysed and managed.
Originality/value
This paper moves the talent management debate beyond competition for talent, introducing coopetition as a viable alternative. Taking an open-systems perspective, the authors develop an integrative framework for coopetition in talent management in SMEs encompassing input, process and output components. The authors reveal the dynamic and complex nature of this coopetition process, highlighting the essential role of coopetition context and illustrating open-system principles.