Search results
1 – 10 of over 62,000
Jan-Halvard Bergquist, Samantha Tinet and Shang Gao
The purpose of this study is to create an information classification model that is tailored to suit the specific needs of public sector organizations in Sweden.
Abstract
Purpose
The purpose of this study is to create an information classification model that is tailored to suit the specific needs of public sector organizations in Sweden.
Design/methodology/approach
To address the purpose of this research, a case study in a Swedish municipality was conducted. Data was collected through a mixture of techniques, including literature, document and website reviews. Empirical data was collected through interviews with 11 employees working within seven different sections of the municipality.
Findings
This study resulted in an information classification model that is tailored to the specific needs of Swedish municipalities. In addition, a set of steps for tailoring an information classification model to suit a specific public organization is recommended. The findings also indicate that successful information classification requires educating employees about the basics of information security and classification and creating an understandable, unified information security language.
Practical implications
This study also highlights that to have a tailored information classification model, it is imperative to understand the value of information and what kind of consequences a violation of established information security principles could have through the perspectives of the employees.
Originality/value
This study is the first of its kind to tailor an information classification model to the specific needs of a Swedish municipality. The model provided by this study can be used as a tool to facilitate a common ground for classifying information within all Swedish municipalities, thereby contributing a first step toward a Swedish municipal model for information classification.
Details
Keywords
Tito Ceci de Sena and Márcio Minto Fabricio
This study proposes a framework for collaborative building information modeling (BIM) implementation in construction and development companies in the Brazilian architecture…
Abstract
Purpose
This study proposes a framework for collaborative building information modeling (BIM) implementation in construction and development companies in the Brazilian architecture, engineering and construction (AEC) market. The study addresses aspects concerning BIM collaboration, levels of adoption and maturity, classification of BIM objects and use of tools.
Design/methodology/approach
The study's conclusions are based on a bibliographic review and on active participation in a BIM implementation process conducted with two construction and development companies. This participation allowed the authors to examine the practical problems of developing BIM models across various technical specialties and to propose a framework to help overcome these limitations.
Findings
The research identified the importance of adopting standardized methods to develop models and of establishing common classifications for objects so that they can be used by different stakeholders in 3D, 4D and 5D processes, in a context where information is scattered and, in many cases, divergent across different companies and even across different areas of the same company.
Originality/value
The study presents a practical set of methods and tools to be used within a context common to the Brazilian AEC market, in which construction and development companies are responsible for managing the design and construction phases of a building. The recommendations take into account the shortage of nationwide frameworks and classification standards, thereby helping to fill gaps in the current literature, which covers theoretical aspects of guidance documents for BIM implementation but does not detail specific practical applications within a given context. The main limitation of the proposed framework is its focus on context-specific guidelines, which may not be universally applicable.
Details
Keywords
Erik Bergström, Fredrik Karlsson and Rose-Mharie Åhlfeldt
The purpose of this paper is to develop a method for information classification. The proposed method draws on established standards, such as the ISO/IEC 27002 and information…
Abstract
Purpose
The purpose of this paper is to develop a method for information classification. The proposed method draws on established standards, such as the ISO/IEC 27002 and information classification practices. The long-term goal of the method is to decrease the subjective judgement in the implementation of information classification in organisations, which can lead to information security breaches because the information is under- or over-classified.
Design/methodology/approach
The results are based on a design science research approach, implemented as five iterations spanning the years 2013 to 2019.
Findings
The paper presents a method for information classification and the design principles underpinning the method. The empirical demonstration shows that senior and novice information security managers perceive the method as a useful tool for classifying information assets in an organisation.
Research limitations/implications
Existing research has provided only limited advice on how to approach information classification in organisations systematically. The method presented in this paper can act as a starting point for further research in this area aimed at decreasing subjectivity in the information classification process. Additional research is needed to fully validate the proposed method for information classification and its potential to reduce subjective judgement.
Practical implications
The research contributes to practice by offering a method for information classification and a hands-on tool for implementing an information classification process. Moreover, this research shows that it is possible to devise such a method. This is important because, even if an organisation chooses not to adopt the proposed method, the fact that a method of this kind has proved useful should encourage similar endeavours.
Originality/value
The proposed method offers a detailed and well-elaborated tool for information classification. The method is generic and adaptable, depending on organisational needs.
Details
Keywords
Yong Ding, Peixiong Huang, Hai Liang, Fang Yuan and Huiyong Wang
Recently, deep learning (DL) has been widely applied in various aspects of human endeavors. However, studies have shown that DL models may also be a primary cause of data leakage…
Abstract
Purpose
Recently, deep learning (DL) has been widely applied in various aspects of human endeavors. However, studies have shown that DL models may also be a primary cause of data leakage, which raises new data privacy concerns. Membership inference attacks (MIAs) are prominent threats to user privacy from DL model training data, as attackers investigate whether specific data samples exist in the training data of a target model. Therefore, the aim of this study is to develop a method for defending against MIAs and protecting data privacy.
Design/methodology/approach
This study proposes an MIA defense method that adjusts the model’s output by mapping it to a distribution with equal probability density. This approach preserves the accuracy of classification predictions while preventing attackers from identifying the training data.
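The abstract does not specify the exact mapping; the following is only a minimal sketch of the general idea, assuming a softmax-style probability vector and a hypothetical flatten_confidence helper that pushes the output toward a uniform distribution while leaving the predicted class (the argmax) unchanged, so classification accuracy is preserved but the per-sample confidence that MIAs exploit is largely removed:

```python
def flatten_confidence(probs, eps=1e-3):
    """Map a probability vector toward the uniform distribution while
    keeping the predicted class (argmax) unchanged. `eps` is the tiny
    margin left for the predicted class (hypothetical parameter)."""
    k = len(probs)
    top = max(range(k), key=lambda i: probs[i])  # index of predicted class
    out = [(1.0 - eps) / k] * k                  # near-uniform probability mass
    out[top] += eps                              # small margin keeps the argmax
    return out

# a confident prediction becomes near-uniform, but the argmax stays at index 0
adjusted = flatten_confidence([0.92, 0.05, 0.03])
```

Because every sample now yields almost the same output vector, a membership inference attacker has little signal to distinguish training members from non-members.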
Findings
Experiments demonstrate that the proposed defense method reduces the classification accuracy of MIAs to below 50%. Because an MIA is effectively a binary classifier, accuracy at this level is no better than random guessing, so the method prevents privacy leakage and improves data privacy protection.
Research limitations/implications
The method is designed only to defend against MIAs on black-box classification models.
Originality/value
The proposed MIA defense method is effective and has a low cost. Therefore, the method enables us to protect data privacy without incurring significant additional expenses.
Details
Keywords
Charles R. Senteio, Kaitlin E. Montague, Stacy Brody and Kristen B. Matteucci
This paper aims to describe how public librarians can better address complex information needs. First, librarians should classify the degree of complexity of the need by using…
Abstract
Purpose
This paper aims to describe how public librarians can better address complex information needs. First, librarians should classify the degree of complexity of the need by using Warner’s classification model; then they can use Popper’s three world theory to anticipate and respond to complex information needs by following specific steps.
Design/methodology/approach
After examining the information science literature, appropriate models were selected to support public librarians. Our information science scholarship, coupled with our practical experience, informed our search and selection.
Findings
This paper details specific steps that public librarians can take to anticipate and respond to individual information needs. Doing so is imperative as the information needs of the public continue to become increasingly complex.
Originality/value
This paper improves information practice because it offers specific steps to aid public librarians to anticipate and respond to complex information needs. It draws upon an existing model and theoretical framework. This paper also highlights selected examples of how public librarians across the USA have anticipated information needs, and developed partnerships with organizations external to the public library to address complex information needs.
Details
Keywords
Hao Wang and Sanhong Deng
In the era of Big Data, network digital resources are growing rapidly, especially the short-text resources, such as tweets, comments, messages and so on, are showing a vigorous…
Abstract
Purpose
In the era of Big Data, network digital resources are growing rapidly; short-text resources in particular, such as tweets, comments and messages, are showing vigorous vitality. This study aims to compare the categories discriminative capacity (CDC) of Chinese language fragments of different granularities and to explore and verify the feasibility, rationality and effectiveness of low-granularity features, such as Chinese characters, in Chinese short-text classification (CSTC).
Design/methodology/approach
This study takes discipline classification of journal articles from CSSCI as a simulation environment. After sorting out the distribution rules of classification features of various granularities, including keywords, terms and characters, the classification effects, assessed with the SVM algorithm, are comprehensively compared and evaluated from three angles: using the same experimental samples, testing before and after feature optimization, and introducing external data.
Findings
The granularity of a classification feature has an important impact on CSTC. In general, the larger the granularity, the better the classification result, and vice versa. However, a low-granularity feature is also feasible, and its CDC can be improved by reasonable weight settings, even exceeding that of a high-granularity feature when classification precision, computational complexity and text coverage are considered together.
Originality/value
This is the first study to propose that Chinese characters are more suitable as descriptive features in CSTC than terms and keywords and to demonstrate that the CDC of Chinese character features can be strengthened by combining frequency and position in the feature weight.
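The abstract does not give the exact weighting scheme; a toy sketch of character-granularity features, assuming a hypothetical scheme in which characters near the start of the text (e.g. in a title or lead sentence) receive a position boost on top of their frequency, might look like:

```python
from collections import Counter

def char_features(text, position_boost=2.0):
    """Character-granularity features for short-text classification:
    per-character frequency, with characters in the opening quarter of
    the text weighted higher. The boost factor is a made-up example."""
    n = len(text)
    weights = Counter()
    for i, ch in enumerate(text):
        # frequency accumulates; early positions contribute extra weight
        weights[ch] += position_boost if i < n / 4 else 1.0
    total = sum(weights.values())
    return {ch: w / total for ch, w in weights.items()}  # normalized

features = char_features("短文本分类基于字特征")
```

Unlike term or keyword features, this requires no segmentation of the Chinese text, which is one practical appeal of low-granularity features.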
Details
Keywords
Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista
This study aims to review recent advances towards the implementation of ANN and NLP applications during the budgeting phase of the construction process. During this phase…
Abstract
Purpose
This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually, with little to no computer aid, within very austere time constraints, even though the results determine the company’s bid quality and are contractually binding.
Design/methodology/approach
This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the preferred reporting items for systematic reviews and meta-analyses guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
Findings
This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.
Originality/value
Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.
Details
Keywords
This study aims to identify problems connected to information classification in theory and to put those problems into the context of experiences from practice.
Abstract
Purpose
This study aims to identify problems connected to information classification in theory and to put those problems into the context of experiences from practice.
Design/methodology/approach
Five themes describing problems are discussed in an empirical study, with informants from both a public and a private sector organization.
Findings
The reasons problems occur in information classification are exemplified through the informants’ experiences. The study concludes with directions for future research.
Originality/value
Information classification underpins basic security measures. The human and organizational challenges are evident in classification activities but have received little attention in research.
Details
Keywords
Jie Sun, Hui Li, Pei-Chann Chang and Qing-Hua Huang
Previous research on credit scoring has mainly focused on static modeling of panel data sets in a certain period of time and has not paid enough attention to dynamic…
Abstract
Purpose
Previous research on credit scoring has mainly focused on static modeling of panel data sets in a certain period of time and has not paid enough attention to dynamic incremental modeling. The purpose of this paper is to integrate the branch and bound algorithm with an incremental support vector machine (SVM) ensemble to enable dynamic modeling of credit scoring.
Design/methodology/approach
This new model hybridizes the support vectors of old data with incremental corporate financial data in a dynamic ensemble modeling process based on bagged SVMs. In the incremental stage, multiple base SVM models are dynamically adjusted according to the bagged updated information for credit scoring. These updated base models are then combined to generate a dynamic credit score. In the empirical experiment, the new method was compared with a traditional non-incremental SVM ensemble model for credit scoring.
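The abstract gives no implementation details; the sketch below illustrates only the ensemble mechanics described (each base model retains its "support" samples from the old data, retrains on them plus a bootstrap sample of the new batch, and the bases vote), using a trivial one-dimensional threshold classifier as a stand-in for a real SVM. All class and method names are hypothetical.

```python
import random
from statistics import mean

class ThresholdBase:
    """Stand-in base learner (NOT a real SVM): a 1-D threshold classifier
    that keeps the samples nearest its boundary as pseudo support vectors."""
    def __init__(self, keep=6):
        self.keep, self.t = keep, 0.0
        self.sv_x, self.sv_y = [], []

    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        self.t = (mean(pos) + mean(neg)) / 2.0
        near = sorted(range(len(xs)), key=lambda i: abs(xs[i] - self.t))[:self.keep]
        self.sv_x = [xs[i] for i in near]   # retained "support vectors"
        self.sv_y = [ys[i] for i in near]

    def predict(self, x):
        return 1 if x >= self.t else 0

class IncrementalEnsemble:
    """Bagged incremental ensemble: each base model retrains on its retained
    support samples plus a bootstrap sample of the new batch, then votes."""
    def __init__(self, n_models=5, seed=0):
        self.models = [ThresholdBase() for _ in range(n_models)]
        self.rng = random.Random(seed)

    def _bag(self, ys, n):
        while True:  # redraw until the bootstrap bag contains both classes
            idx = [self.rng.randrange(n) for _ in range(n)]
            if len({ys[i] for i in idx}) == 2:
                return idx

    def update(self, xs, ys):
        for m in self.models:
            idx = self._bag(ys, len(xs))
            m.fit(m.sv_x + [xs[i] for i in idx],   # old support samples
                  m.sv_y + [ys[i] for i in idx])   # + bagged new batch

    def predict(self, x):
        votes = sum(m.predict(x) for m in self.models)
        return 1 if votes > len(self.models) / 2 else 0

# first period: solvent firms score near 10, distressed firms near 0
ens = IncrementalEnsemble()
ens.update([0.0, 1.0, 2.0, 8.0, 9.0, 10.0], [0, 0, 0, 1, 1, 1])
# later period: a new batch is folded in with the retained support samples
ens.update([1.0, 2.0, 3.0, 9.0, 10.0, 11.0], [0, 0, 0, 1, 1, 1])
```

In the paper's actual method the base learners are SVMs, where the retained samples are the genuine support vectors; the stub above only mimics that retention step so the incremental update loop can be shown end to end.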
Findings
The results show that the new model is able to continuously and dynamically adjust credit scoring according to incremental corporate information, which produces better evaluation performance than the traditional model.
Originality/value
This research pioneers dynamic modeling for credit scoring with an incremental SVM ensemble. As time passes, new incremental samples are combined with the support vectors of old samples to construct the SVM ensemble credit scoring model. The incremental model continuously adjusts itself to maintain good evaluation performance.
Details
Keywords
Fernanda Gonzalez-Lopez and Guillermo Bustos
The purpose of this paper is to describe the current state of the research field of business process architecture (BPA) and its design methodologies.
Abstract
Purpose
The purpose of this paper is to describe the current state of the research field of business process architecture (BPA) and its design methodologies.
Design/methodology/approach
A systematic literature review (SLR) was conducted using meta- and content-based perspectives.
Findings
From over 6,000 candidate studies, 89 were selected. A fifth of these primary works corresponded to BPA design methodologies. Though the BPA research field remains in an early stage of development, it bears promising growth potential. Regarding BPA design methodologies, the following aspects open to further research were identified: identification and modeling of business process relationships; specification of inputs; standardization of models, notations and tool support; consideration of managerial concerns; integration of knowledge from other areas; and validation of methodological and product quality aspects.
Research limitations/implications
The main limitation of the work is that it is not fully reproducible, owing to the fixed number of data sources and their digital nature, together with subjective decisions in study selection, data extraction and data analysis.
Originality/value
To the best of the authors’ knowledge no study has yet analyzed the BPA research field by means of an SLR. This study will benefit practitioners and research groups working on this topic by allowing them to get a rigorous overview of the BPA research field with an emphasis on available BPA design methodologies, and become aware of research gaps within the BPA field to position further research.
Details