Search results
1 – 10 of 175

Sheak Salman, Shah Murtoza Morshed, Md. Rezaul Karim, Rafat Rahman, Sadia Hasanat and Afia Ahsan
Abstract
Purpose
The imperative to conserve resources and minimize operational expenses has spurred a notable increase in the adoption of lean manufacturing within the context of the circular economy across diverse industries in recent years. However, a notable gap exists in the research landscape, particularly concerning the implementation of lean practices within the pharmaceutical industry to enhance circular economy performance. Addressing this void, this study endeavors to identify and prioritize the pivotal drivers influencing lean manufacturing within the pharmaceutical sector.
Findings
The outcome of this rigorous examination highlights that “Continuous Monitoring Process for Sustainable Lean Implementation,” “Management Involvement for Sustainable Implementation” and “Training and Education” emerge as the most consequential drivers. These factors are deemed crucial for augmenting circular economy performance, underscoring the significance of management engagement, training initiatives and a continuous monitoring process in fostering a closed-loop practice within the pharmaceutical industry.
Research limitations/implications
The findings contribute valuable insights for decision-makers aiming to adopt lean practices within a circular economy framework. Specifically, by streamlining the process of developing a robust action plan tailored to the unique needs of the pharmaceutical sector, our study provides actionable guidance for enhancing overall sustainability in the manufacturing processes.
Originality/value
This study represents one of the initial efforts to systematically identify and assess the drivers of lean manufacturing (LM) implementation within the pharmaceutical industry, contributing to the emerging body of knowledge in this area.
Tomás Lopes and Sérgio Guerreiro
Abstract
Purpose
Testing business processes is crucial to assess the compliance of business process models with requirements. Automating this task optimizes testing efforts and reduces human error while also providing improvement insights for the business process modeling activity. The primary purposes of this paper are to conduct a literature review of Business Process Model and Notation (BPMN) testing and formal verification and to propose the Business Process Evaluation and Research Framework for Enhancement and Continuous Testing (bPERFECT) framework, which aims to guide business process testing (BPT) research and implementation. Secondary objectives include (1) eliciting the existing types of testing, (2) evaluating their impact on efficiency and (3) assessing the formal verification techniques that complement testing.
Design/methodology/approach
The methodology used is based on Kitchenham's (2004) original procedures for conducting systematic literature reviews.
Findings
Results of this study indicate that three distinct business process model testing types can be found in the literature: black/gray-box, regression and integration. Testing and verification approaches differ in aspects such as awareness of test data, coverage criteria and auxiliary representations used. However, most solutions pose notable hindrances, such as BPMN element limitations, that lead to limited practicality.
Research limitations/implications
The databases selected in the review protocol may have excluded relevant studies on this topic. More databases and gray literature could also be considered for inclusion in this review.
Originality/value
Three main originality aspects are identified in this study: (1) the classification of process model testing types, (2) the future trends foreseen for BPMN model testing and verification and (3) the bPERFECT framework for testing business processes.
Edoardo Ramalli and Barbara Pernici
Abstract
Purpose
Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model performance. Uncertainty inherently affects experiment measurements and is often missing in the available data sets due to its estimation cost. For similar reasons, experiments are very few compared to other data sources. Discarding experiments based on the missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assess data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.
Design/methodology/approach
This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
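The link-prediction step can be illustrated with a minimal TransE-style sketch (the entity names, relation name and embeddings below are hypothetical illustrations, not the paper's actual model or data): the missing uncertainty of an experiment is recovered by ranking candidate tail entities for the incomplete triple (experiment, hasUncertainty, ?).

```python
import numpy as np

# Minimal TransE-style scoring: a triple (h, r, t) is plausible when the
# tail embedding lies close to head + relation, so score = -||h + r - t||.
def score(h, r, t):
    return -np.linalg.norm(h + r - t)

dim = 4
rng = np.random.default_rng(0)
experiment = rng.normal(size=dim)        # hypothetical experiment entity
has_uncertainty = rng.normal(size=dim)   # hypothetical relation embedding
# Candidate uncertainty-level entities; "low" is planted as the exact
# translation head + relation, so it should rank first in link prediction.
candidates = {
    "low": experiment + has_uncertainty,
    "medium": rng.normal(size=dim),
    "high": rng.normal(size=dim),
}
ranked = sorted(candidates,
                key=lambda t: score(experiment, has_uncertainty, candidates[t]),
                reverse=True)
print(ranked[0])  # "low"
```

In practice the embeddings are learned from the knowledge graph rather than planted, and the top-ranked candidate is taken as the predicted uncertainty value.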
Findings
The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.
Originality/value
The employment of knowledge graph embeddings to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.
Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi
Abstract
Purpose
In the last few years, the amount of Linked Open Data (LOD) describing artworks, whether in general-purpose or domain-specific Knowledge Graphs (KGs), has gradually increased. This provides (art) historians and cultural heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on icon aspects.
Design/methodology/approach
This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
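The correctness and completeness sides of such a content evaluation reduce to precision- and recall-like ratios against a gold-standard description; a minimal sketch, with all triples invented for illustration:

```python
# Hypothetical gold-standard icon statements for one artwork vs. the
# statements actually found in a KG (every triple here is invented).
expected = {("artwork1", "depicts", "saint_jerome"),
            ("artwork1", "depicts", "lion"),
            ("artwork1", "symbolizes", "penitence")}
found = {("artwork1", "depicts", "saint_jerome"),
         ("artwork1", "depicts", "lion"),
         ("artwork1", "depicts", "tree")}  # one spurious statement

completeness = len(expected & found) / len(expected)  # recall-like
correctness = len(expected & found) / len(found)      # precision-like
print(completeness, correctness)
```

Here the KG covers two of the three expected statements (incomplete) and one of its three statements is spurious, mirroring the paper's "generally correct but often not complete" pattern.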
Findings
This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
Originality/value
The main contribution of this work is an overview of the actual landscape of icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study’s results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need to create and foster such information to provide a more thorough art-historical dimension to LOD.
Athitaya Nitchot and Lester Gilbert
Abstract
Purpose
Our study aims to focus on the application of knowledge mapping to provide pedagogically-structured learners' competences.
Design/methodology/approach
We conducted an experiment that examined the associations between the pedagogical quality of students’ pedagogically-informed knowledge (PIK) maps, class assignment scores and perceptions of PIK mapping’s uses.
Findings
The results showed that higher assignment scores were significantly predicted by higher-quality PIK maps, that ratings for PIK mapping were significantly higher than those for other mappings, and that the learners’ experience of PIK mapping led to a significant change of attitude towards mapping as a learning activity and to a positive opinion of the value of PIK mapping in particular. Interestingly, there was no significant relation between learners’ opinion ratings of the uses of PIK mapping in learning and their assignment scores.
Originality/value
Questions remain on the generalizability of the findings, and on the features of a PIK map which are particularly useful to a learner. This study investigated the value of PIK mapping in the context of a practical class on the building of simple DIY (do-it-yourself) holographic projectors; it may be thought that the applied nature of the topic was more suited to the PIK mapping of learner competences and intended learning outcomes than a more theoretic classroom topic on holography. A future study is planned to address this issue.
Ramy Shaheen, Suhail Mahfud and Ali Kassem
Abstract
Purpose
This paper aims to study irreversible conversion processes, which examine the spread of a one-way change of state (from state 0 to state 1) through a specified society (the spread of disease through populations, the spread of opinion through social networks, etc.), where the conversion rule is determined at the beginning of the study. These processes can be modeled graph-theoretically, with the vertex set V(G) representing the set of individuals over which the conversion spreads.
Design/methodology/approach
The irreversible k-threshold conversion process on a graph G = (V, E) is an iterative process which starts by choosing a set S_0 ⊆ V; for each step t (t = 1, 2, …), S_t is obtained from S_(t−1) by adjoining all vertices that have at least k neighbors in S_(t−1). S_0 is called the seed set of the k-threshold conversion process, and it is an irreversible k-threshold conversion set (IkCS) of G if S_t = V(G) for some t ≥ 0. The minimum cardinality over all IkCSs of G is referred to as the irreversible k-threshold conversion number of G and is denoted C_k(G).
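The iterative process described above can be sketched directly; a minimal illustration over an adjacency-list graph (not code from the paper):

```python
def k_threshold_conversion(neighbors, seed, k):
    """Run the irreversible k-threshold conversion process.

    neighbors: dict mapping each vertex to its set of neighbors.
    seed: the seed set S_0. Returns the final converted set.
    """
    converted = set(seed)
    while True:
        # Adjoin every unconverted vertex with at least k converted neighbors.
        newly = {v for v in neighbors
                 if v not in converted and len(neighbors[v] & converted) >= k}
        if not newly:
            return converted
        converted |= newly

# Example: on the path P_4, the seed {0} with k = 1 converts every vertex,
# so {0} is an irreversible 1-threshold conversion set of P_4.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(k_threshold_conversion(path, {0}, 1))
```

With k = 2 the same seed converts nothing further, since no unconverted vertex of a path ever has two converted neighbors initially; this is why the conversion number C_k(G) grows with k.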
Findings
In this paper, the authors determine C_k(G) for the generalized Jahangir graph J_(s,m) for 1 < k ≤ m with s, m arbitrary. The authors also determine C_k(G) for the strong grids P_2 ⊠ P_n when k = 4, 5. Finally, the authors determine C_2(G) for P_n ⊠ P_n when n is arbitrary.
Originality/value
This work is original and has important applications to real-life problems such as anti-bioterrorism.
Manuel Rossetti, Juliana Bright, Andrew Freeman, Anna Lee and Anthony Parrish
Abstract
Purpose
This paper is motivated by the need to assess the risk profiles associated with the substantial number of items within military supply chains. The scale of supply chain management processes creates difficulties both in the complexity of the analysis and in performing risk assessments based on manual (human analyst) assessment methods. Thus, analysts require methods that can be automated and that can incorporate ongoing operational data on a regular basis.
Design/methodology/approach
The approach taken to address the identification of supply chain risk within an operational setting is based on aspects of multiobjective decision analysis (MODA). The approach constructs a risk and importance index for supply chain elements based on operational data. These indices are commensurate in value, leading to interpretable measures for decision-making.
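A commensurate index of this kind is typically built as a weighted additive value model: each operational metric is normalized onto a common [0, 1] scale and combined with weights. A minimal sketch (the metrics, scales and weights below are illustrative assumptions, not the authors' actual model):

```python
def minmax(x, lo, hi):
    """Normalize a raw metric onto a common [0, 1] value scale."""
    return (x - lo) / (hi - lo) if hi > lo else 0.0

def additive_index(values, weights):
    """Weighted additive value model; weights are assumed to sum to 1."""
    return sum(w * v for v, w in zip(values, weights))

# Hypothetical item: lead time of 30 on a [0, 60]-day scale, 2 suppliers on a
# [0, 10] scale (fewer suppliers is riskier, so that scale is inverted), and
# demand of 80 on a [0, 100] scale.
values = [minmax(30, 0, 60), 1 - minmax(2, 0, 10), minmax(80, 0, 100)]
risk_index = additive_index(values, [0.5, 0.3, 0.2])
print(round(risk_index, 3))  # 0.65
```

Because every attribute is mapped to the same [0, 1] scale before weighting, a risk index and an importance index built this way are commensurate and can be compared directly across items, as the paper's approach requires.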
Findings
Risk and importance indices were developed for the analysis of items within an example supply chain. Using the data on items, individual MODA models were formed and demonstrated using a prototype tool.
Originality/value
To better prepare risk mitigation strategies, analysts require the ability to identify potential sources of risk, especially in times of disruption such as natural disasters.
This study investigated the visibility of carbon emissions allowances accounting in the financial reports of 32 clean development mechanism (CDM) projects in the UAE to uncover…
Abstract
Purpose
This study investigated the visibility of carbon emissions allowances accounting in the financial reports of 32 clean development mechanism (CDM) projects in the UAE to uncover the obstacles to setting consistent standards for carbon emission accounting. As carbon emissions are monetized as credits, consistent accounting standards can aid decision-makers in the development of carbon emission mitigation strategies.
Design/methodology/approach
This study used a grounded theoretical framework for exploring the terms used in the policy documents of international accounting bodies regarding accounting standards and guidelines for carbon emission credits. Raw qualitative data were gathered, and an inductive approach was used by analyzing documents from various sources using the qualitative data text analysis software QDA Miner 6.
Findings
The findings showed that the financial statement reports of the corporations did not include disclosure of the carbon credit account. This omission was due to the lack of global standardization of carbon credit accounts and emission allowance recognition. This may hinder the production of a comprehensive report containing accurate and valuable financial information relevant to all stakeholders.
Originality/value
The study is among the first to use a grounded theoretical framework to investigate whether corporations are applying common standards and guidelines for carbon emissions accounting.
Manuel J. Sánchez-Franco and Sierra Rey-Tienda
Abstract
Purpose
This research proposes to organise and distil the massive amount of online guest review data, making it easier to understand. Using data mining, machine learning techniques and visual approaches, researchers and managers can extract valuable insights on guests' preferences and convert them into strategic thinking based on exploration and predictive analysis. Consequently, this research aims to assist hotel managers in making informed decisions, thus improving the overall guest experience and increasing competitiveness.
Design/methodology/approach
This research employs natural language processing techniques, data visualisation proposals and machine learning methodologies to analyse unstructured guest service experience content. In particular, this research (1) applies data mining to evaluate the role and significance of critical terms and semantic structures in hotel assessments; (2) identifies salient tokens to depict guests' narratives based on term frequency and the information quantity they convey; and (3) tackles the challenge of managing extensive document repositories through automated identification of latent topics in reviews by using machine learning methods for semantic grouping and pattern visualisation.
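Step (2), scoring salient tokens by frequency and by the information they convey, can be sketched with plain term frequency plus surprisal (−log2 of a token's probability); the toy reviews below are invented for illustration, not the study's corpus:

```python
import math
from collections import Counter

reviews = [
    "friendly staff clean room great breakfast",
    "noisy room small pool friendly staff",
    "great location clean room slow checkin",
]
tokens = [w for review in reviews for w in review.split()]
tf = Counter(tokens)          # term frequency across the whole corpus
total = len(tokens)
# Surprisal: rarer tokens carry more information (-log2 of their probability).
surprisal = {w: -math.log2(c / total) for w, c in tf.items()}
most_frequent = tf.most_common(1)[0][0]
print(most_frequent)  # "room"
```

Frequent tokens like "room" describe what most guests mention, while high-surprisal tokens like "noisy" flag the distinctive complaints; combining both views is one simple way to pick the salient tokens that depict guests' narratives.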
Findings
This study’s findings (1) aim to identify critical features and topics that guests highlight during their hotel stays, (2) visually explore the relationships between these features and differences among diverse types of travellers through online hotel reviews and (3) determine predictive power. Their implications are crucial for the hospitality domain, as they provide real-time insights into guests' perceptions and business performance and are essential for making informed decisions and staying competitive.
Originality/value
This research seeks to minimise the cognitive processing costs of the enormous amount of user-published content through a better organisation of hotel service reviews and their visualisation. Likewise, this research aims to propose a methodology and method available to tourism organisations to obtain truly usable knowledge in the design of the hotel offer and its value propositions.
Suchismita Swain, Kamalakanta Muduli, Anil Kumar and Sunil Luthra
Abstract
Purpose
The goal of this research is to analyse the obstacles to the implementation of mobile health (mHealth) in India and to gain an understanding of the contextual inter-relationships that exist amongst those obstacles.
Design/methodology/approach
Potential barriers and their contextual interrelationships were first uncovered. Using MICMAC analysis, these barriers were categorized based on their degree of dependence and driving power (DP). Furthermore, an interpretive structural modeling (ISM) framework for the barriers to mHealth activities in India has been proposed.
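In MICMAC analysis, a barrier's driving power is the row sum of the final (transitively closed) reachability matrix and its dependence is the column sum; plotting the two places each barrier in the driver, linkage, dependent or autonomous quadrant. A minimal sketch with a hypothetical four-barrier matrix (not the study's 15-barrier data):

```python
import numpy as np

# Hypothetical final reachability matrix: entry [i][j] = 1 if barrier i
# reaches (influences) barrier j, including the diagonal and transitive links.
R = np.array([
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
])
driving_power = R.sum(axis=1)  # row sums: how many barriers each one drives
dependence = R.sum(axis=0)     # column sums: how many barriers drive each one
print(driving_power.tolist(), dependence.tolist())
```

Here barrier 0 has the highest driving power and lowest dependence, so it would land in the driver quadrant, the role the study assigns to barriers such as the government's economic situation.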
Findings
The study explores a total of 15 factors that reduce the efficiency of mHealth adoption in India. The findings of the Matrix Cross-Reference Multiplication Applied to a Classification (MICMAC) investigation show that the economic situation of the government, concerns regarding the safety of intellectual technologies and privacy issues are the primary obstacles because of the significant driving power they have in mHealth applications.
Practical implications
Promoters of mHealth practices may be able to make better plans if they understand the social barriers and how they affect each other; this leads to easier adoption of these practices. The findings of this study might be helpful for governments of developing nations to produce standards relating to the deployment of mHealth; this will increase the efficiency with which it is adopted.
Originality/value
At this time, there is no comprehensive analysis of the factors that influence the adoption of mobile health care with social cognitive theory in developing nations like India. In addition, there is a lack of research in investigating how each of these elements affects the success of mHealth activities and how the others interact with them. Because developed nations learnt the value of mHealth practices during the recent pandemic, this study, by investigating the obstacles to the adoption of mHealth and their inter-relationships, makes an important addition to both theory and practice.