Search results
1 – 10 of over 3,000
Tomás Lopes and Sérgio Guerreiro
Abstract
Purpose
Testing business processes is crucial to assess the compliance of business process models with requirements. Automating this task optimizes testing efforts and reduces human error while also providing improvement insights for the business process modeling activity. The primary purposes of this paper are to conduct a literature review of Business Process Model and Notation (BPMN) testing and formal verification and to propose the Business Process Evaluation and Research Framework for Enhancement and Continuous Testing (bPERFECT) framework, which aims to guide business process testing (BPT) research and implementation. Secondary objectives include (1) eliciting the existing types of testing, (2) evaluating their impact on efficiency and (3) assessing the formal verification techniques that complement testing.
Design/methodology/approach
The methodology used is based on Kitchenham's (2004) original procedures for conducting systematic literature reviews.
Findings
Results of this study indicate that three distinct business process model testing types can be found in the literature: black/gray-box, regression and integration. Testing and verification approaches differ in aspects such as awareness of test data, coverage criteria and auxiliary representations used. However, most solutions pose notable hindrances, such as BPMN element limitations, that lead to limited practicality.
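The coverage-driven approaches surveyed here typically derive test cases from the paths of a process model. As a rough, hypothetical illustration (not taken from any reviewed solution; the model and element names are made up), the following sketch enumerates all execution paths of a toy acyclic process graph so that each path can serve as one test case:

```python
# Illustrative sketch: path-based test-case generation for a simplified,
# acyclic BPMN-like process graph. All names are hypothetical.

def enumerate_paths(graph, node, path=None):
    """Depth-first enumeration of all start-to-end paths in an acyclic process graph."""
    path = (path or []) + [node]
    successors = graph.get(node, [])
    if not successors:          # end event reached
        return [path]
    paths = []
    for nxt in successors:      # a gateway fans out into several branches
        paths.extend(enumerate_paths(graph, nxt, path))
    return paths

# Toy model: start -> task A -> exclusive gateway -> (B or C) -> end
process = {
    "start": ["A"],
    "A": ["gateway"],
    "gateway": ["B", "C"],
    "B": ["end"],
    "C": ["end"],
    "end": [],
}

for p in enumerate_paths(process, "start"):
    print(" -> ".join(p))
```

Real BPMN testing must additionally handle parallel gateways, loops and test data, which is where the element limitations noted in the findings tend to arise.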
Research limitations/implications
The databases selected in the review protocol may have excluded relevant studies on this topic. More databases and gray literature could also be considered for inclusion in this review.
Originality/value
This study offers three main original contributions: (1) a classification of process model testing types, (2) the future trends foreseen for BPMN model testing and verification and (3) the bPERFECT framework for testing business processes.
Zhuoxuan Jiang, Chunyan Miao and Xiaoming Li
Abstract
Purpose
Recent years have witnessed the rapid development of massive open online courses (MOOCs). With more and more courses being produced by instructors and taken by learners all over the world, an unprecedented mass of educational resources has been aggregated. These resources include videos, subtitles, lecture notes, quizzes, etc., on the teaching side, and forum contents, Wiki, logs of learning behavior, logs of homework, etc., on the learning side. However, the data are both unstructured and diverse. To facilitate knowledge management and mining on MOOCs, extracting keywords from these resources is important. This paper aims to adapt state-of-the-art techniques to MOOC settings and evaluate their effectiveness on real data. In terms of practice, it also tries to answer, for the first time, two questions: to what extent can MOOC resources support keyword extraction models, and how much human effort is required to make the models work well?
Design/methodology/approach
Based on which side generates the data, i.e. instructors or learners, the data are classified into teaching resources and learning resources, respectively. The approach used on teaching resources is based on supervised machine learning models that require labels, while the approach used on learning resources is based on an unsupervised graph model that requires no labels.
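The unlabeled, graph-based side of the approach can be sketched minimally in the spirit of TextRank-style co-occurrence ranking. This is not the authors' implementation; the sample text, window size and scoring rule are illustrative assumptions:

```python
# Minimal sketch of unsupervised keyword ranking on unlabeled text:
# score each word by how often it co-occurs with other words inside a
# sliding window, then return the highest-scoring words.
from collections import defaultdict

def rank_keywords(tokens, window=2, top_k=3):
    """Rank words by weighted degree in an implicit co-occurrence graph."""
    score = defaultdict(int)
    for i, word in enumerate(tokens):
        # Neighbours within `window` positions on either side.
        for other in tokens[max(0, i - window):i] + tokens[i + 1:i + window + 1]:
            if other != word:
                score[word] += 1
    return sorted(score, key=score.get, reverse=True)[:top_k]

tokens = ("neural networks learn representations neural networks "
          "generalize representations").split()
print(rank_keywords(tokens))
```

A production system would also filter by part of speech, remove stop words and run an iterative centrality computation rather than plain degree counting.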
Findings
From the teaching resources, the authors' methods can accurately extract keywords with only 10 per cent of the data labeled. The authors find that resources of different forms, e.g. subtitles and PPTs, should be considered separately because the models fit them differently. From the learning resources, the keywords extracted from MOOC forums are not as domain-specific as those extracted from teaching resources, but they reflect the topics actively discussed in the forums, which gives instructors useful feedback. The authors implement two applications with the extracted keywords: generating concept maps and generating learning paths. The visual demos show that these have the potential to improve learning efficiency when integrated into a real MOOC platform.
Research limitations/implications
Conducting keyword extraction on MOOC resources is quite difficult because teaching resources are hard to obtain due to copyright restrictions. Getting labeled data is also difficult because expertise in the corresponding domain is usually required.
Practical implications
The experiment results support that MOOC resources are good enough for building models of keyword extraction, and an acceptable balance between human efforts and model accuracy can be achieved.
Originality/value
This paper presents a pioneer study on keyword extraction on MOOC resources and obtains some new findings.
Per Håkon Meland, Karin Bernsmed, Christian Frøystad, Jingyue Li and Guttorm Sindre
Abstract
Purpose
Within critical-infrastructure industries, bow-tie analysis is an established way of eliciting requirements for safety and reliability concerns. Because of the ever-increasing digitalisation and coupling between the cyber and physical world, security has become an additional concern in these industries. The purpose of this paper is to evaluate how well bow-tie analysis performs in the context of security, and the study’s hypothesis is that the bow-tie notation has a suitable expressiveness for security and safety.
Design/methodology/approach
This study uses a formal, controlled quasi-experiment on two sample populations – security experts and security graduate students – working on the same case. As a basis for comparison, the authors used a similar experiment with misuse case analysis, a well-known technique for graphical security modelling.
Findings
The results show that the collective group of graduate students, inexperienced in security modelling, performed similarly to the security experts within a well-defined scope and a familiar target system/situation. The students showed great creativity, covering most of the same threats and consequences the experts identified and discovering additional ones. One notable difference was that these naïve professionals tended to focus on preventive barriers, leading to requirements for risk mitigation or avoidance, while the experienced professionals balanced these more with reactive barriers and requirements for incident management.
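The preventive/reactive distinction in these findings reflects the two halves of a bow-tie diagram: preventive barriers sit between threats and the top event, reactive barriers between the top event and its consequences. A small, purely illustrative data-structure sketch (the event, threat and barrier names are hypothetical, not from the study):

```python
# Illustrative bow-tie structure: threats -> top event -> consequences,
# with preventive barriers on the threat side and reactive barriers on
# the consequence side. All names are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Barrier:
    name: str
    kind: str  # "preventive" (left of the top event) or "reactive" (right)

@dataclass
class BowTie:
    top_event: str
    threats: dict = field(default_factory=dict)       # threat -> [preventive barriers]
    consequences: dict = field(default_factory=dict)  # consequence -> [reactive barriers]

    def barriers(self, kind):
        side = self.threats if kind == "preventive" else self.consequences
        return [b for bs in side.values() for b in bs]

bt = BowTie(top_event="Unauthorised control of a critical system")
bt.threats["Phishing of operator credentials"] = [
    Barrier("Security awareness training", "preventive")]
bt.consequences["Loss of system availability"] = [
    Barrier("Manual fallback procedures", "reactive")]

print(len(bt.barriers("preventive")), len(bt.barriers("reactive")))  # prints "1 1"
```

In the study's terms, a student-built model would tend to populate the left (preventive) side more heavily, an expert-built one both sides.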
Originality/value
These results are useful in areas where safety and security concerns must be evaluated together, especially in domains with established experience in health, safety and environmental hazards that now need to extend this to cybersecurity as well.