Search results

1 – 10 of over 3000
Open Access
Article
Publication date: 16 August 2021

Jan-Halvard Bergquist, Samantha Tinet and Shang Gao

Abstract

Purpose

The purpose of this study is to create an information classification model that is tailored to suit the specific needs of public sector organizations in Sweden.

Design/methodology/approach

To address the purpose of this research, a case study was conducted in a Swedish municipality. Data was collected through a mixture of techniques, including literature, document and website reviews. Empirical data was collected through interviews with 11 employees working within seven different sections of the municipality.

Findings

This study resulted in an information classification model that is tailored to the specific needs of Swedish municipalities. In addition, a set of steps for tailoring an information classification model to suit a specific public organization is recommended. The findings also indicate that successful information classification requires educating employees about the basics of information security and classification and creating an understandable, unified information security language.

Practical implications

This study also highlights that, to tailor an information classification model, it is imperative to understand, from the employees' perspectives, the value of information and the consequences that a violation of established information security principles could have.

Originality/value

This study is the first of its kind to tailor an information classification model to the specific needs of a Swedish municipality. The model it provides can be used as a tool to facilitate common ground for classifying information within all Swedish municipalities, thereby taking a first step toward a Swedish municipal model for information classification.

Open Access
Article
Publication date: 10 April 2023

Simon Andersson

Abstract

Purpose

This study aims to identify problems connected to information classification in theory and to put those problems into the context of experiences from practice.

Design/methodology/approach

Five themes describing problems are discussed in an empirical study, with informants representing both a public and a private sector organization.

Findings

The reasons for problems to occur in information classification are exemplified by the informants’ experiences. The study concludes with directions for future research.

Originality/value

Information classification underpins basic security measures. The human and organizational challenges involved are evident in classification activities but have received little attention in research.

Details

Information & Computer Security, vol. 31 no. 4
Type: Research Article
ISSN: 2056-4961

Open Access
Article
Publication date: 13 December 2021

Jutta Haider, Veronica Johansson and Björn Hammarfelt

Abstract

Purpose

The article introduces selected theoretical approaches to time and temporality relevant to the field of library and information science, and it briefly introduces the papers gathered in this special issue. A number of issues that could potentially be followed in future research are presented.

Design/methodology/approach

The authors review a selection of theoretical and empirical approaches to the study of time that originate in or are of particular relevance to library and information science. Four main themes are identified: (1) information as object in temporal perspectives; (2) time and information as tools of power and control; (3) time in society; and (4) experiencing and practicing time.

Findings

The paper advocates a thorough engagement with how time and temporality shape notions of information more broadly. This includes, for example, paying attention to how various dimensions of the late-modern time regime of acceleration feed into the ways in which information is operationalised, how information work is commodified, and how hierarchies of information are established; paying attention to the changing temporal dynamics that networked information systems imply for our understanding of documents or of memory institutions; or paying attention to how external events such as social and natural crises quickly alter the modes, speed, and forms of data production and use, in areas as diverse as information practices, policy, management, representation, and organisation, amongst others.

Originality/value

By foregrounding temporal perspectives in library and information science, the authors advocate dialogue with important perspectives on time that come from other fields. Rather than just including such perspectives in library and information science, however, the authors find that the focus on information and documents that the library and information science field contributes has great potential to advance the understanding of how notions and experiences of time shape late-modern societies and individuals.

Open Access
Article
Publication date: 20 March 2017

Tristan Gerrish, Kirti Ruikar, Malcolm Cook, Mark Johnson and Mark Phillip

Abstract

Purpose

The purpose of this paper is to present a review of the implications building information modelling (BIM) is having on the building energy modelling (BEM) and design of buildings. It addresses the issues surrounding exchange of information throughout the design process, and where BIM may be useful in contributing to effective design progression and information availability.

Design/methodology/approach

Through review of current design procedures and examination of the concurrency between architectural and thermophysical design modelling, a procedure for information generation relevant to design stakeholders is created, and applied to a high-performance building project currently under development.

Findings

The extent of the information key to the successful design of a building's energy performance in relation to its architectural objectives is given, with an indication of the level of development required at each stage of the design process.

Practical implications

BIM offers an extensible medium for parametric information storage, and its implementation in design development offers the capability to include BEM parameter-integrated construction information. The extent of information required for accurate BEM at stages of a building’s design is key to understanding how best to record performance information in a BIM environment.

Originality/value

This paper contributes to the discussion around the integration of concurrent design procedures and a common data environment. It presents a framework for the creation and dissemination of information during design, exemplifies this on a real building project and evaluates the barriers experienced in successful implementation.

Details

Engineering, Construction and Architectural Management, vol. 24 no. 2
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 30 September 2021

Sung-Ho Shin and Soo-Yong Shin

Abstract

Global value chains continued to expand until the late 2000s. On the other hand, regional value chains have formed around major regional hubs, owing to the expansion of domestic demand in emerging economies such as China and to the trade protectionism that has strengthened since the global financial crisis. Such changes lead to the reorganisation of value chains around domestic markets (reshoring) or neighbouring countries (nearshoring). In particular, the importance of supply chain risk management has been highlighted following the disruptions to supply networks caused by the COVID-19 outbreak in December 2019. In this regard, major economies such as the USA and the EU are rapidly shifting to regional value chains for stable and sustainable production, rather than primarily aiming for production efficiency targeted at reducing costs. Some industries are particularly exposed to such supply chain risks under the existing structure, and it has now become extremely important for businesses to react to those risks. This is especially true for a country's major industries, such as the automobile and semiconductor manufacturing industries in South Korea. The aim of this study, therefore, is to establish the basis for the simultaneous growth of ports and linked industries by examining the existing structure of the global value chain for the automotive industry, which has a strong presence in South Korea's domestic economy. To this end, this research carries out a supply chain analysis focusing on the imports and exports of automotive parts. It also analyses the current structural risks and suggests risk management measures to secure a stable supply chain.

Details

Journal of International Logistics and Trade, vol. 19 no. 3
Type: Research Article
ISSN: 1738-2122

Open Access
Article
Publication date: 13 February 2023

Elham Rostami, Fredrik Karlsson and Shang Gao

Abstract

Purpose

This paper aims to propose a conceptual model of policy components for software that supports modularizing and tailoring of information security policies (ISPs).

Design/methodology/approach

This study used a design science research approach, drawing on design knowledge from the field of situational method engineering. The conceptual model was developed as a unified modeling language class diagram using existing ISPs from public agencies in Sweden.

Findings

This study’s demonstration as proof of concept indicates that the conceptual model can be used to create free-standing modules that provide guidance about information security in relation to a specific work task and that these modules can be used across multiple tailored ISPs. Thus, the model can be considered as a step toward developing software to tailor ISPs.
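
The idea of free-standing policy modules reused across tailored ISPs can be illustrated with a minimal sketch; the class names, fields and example content below are illustrative assumptions, not the paper's actual UML class diagram:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PolicyModule:
    """A free-standing unit of ISP guidance tied to one work task."""
    work_task: str
    guidance: str

@dataclass
class TailoredISP:
    """An ISP assembled from reusable modules for one target audience."""
    audience: str
    modules: list = field(default_factory=list)

# A module is written once...
email_module = PolicyModule("handling email", "Do not open unexpected attachments.")

# ...and can then be included in several tailored ISPs.
nurses_isp = TailoredISP("nurses", [email_module])
admin_isp = TailoredISP("administrators", [email_module])
```

The point mirrored here is that one module, authored once, appears in multiple tailored policies, so the same guidance does not have to be rewritten for each audience.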

Research limitations/implications

The proposed conceptual model bears several short- and long-term implications for research. In the short term, the model can act as a foundation for developing software to design tailored ISPs. In the long term, having software that enables tailorable ISPs will allow researchers to do new types of studies, such as evaluating the software's effectiveness in the ISP development process.

Practical implications

Practitioners can use the model to develop software that assists information security managers in designing tailored ISPs. Such a tool can give information security managers the opportunity to design more purposeful ISPs.

Originality/value

The proposed model offers a detailed and well-elaborated starting point for developing software that supports modularizing and tailoring of ISPs.

Details

Information & Computer Security, vol. 31 no. 3
Type: Research Article
ISSN: 2056-4961

Open Access
Article
Publication date: 16 December 2019

Florian Fahrenbach, Kate Revoredo and Flavia Maria Santoro

Abstract

Purpose

This paper aims to introduce an information and communication technology (ICT) artifact that uses text mining to support the innovative and standardized assessment of professional competences within the validation of prior learning (VPL). Assessment means comparing identified and documented professional competences against a standard or reference point. The designed artifact is evaluated by matching a set of curriculum vitae (CV) scraped from LinkedIn against a comprehensive model of professional competence.
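
The matching step, comparing CV text against a competence model, can be sketched with a simple bag-of-words similarity; the competence model, CV text and scoring below are illustrative assumptions, not the artifact's actual text-mining method:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative competence model: competence name -> textual description.
competence_model = {
    "project management": "plan coordinate projects budget schedule stakeholders",
    "data analysis": "analyse data statistics python visualisation reporting",
}

cv_text = "experienced in python data analysis statistics and reporting"

# Score the CV against each competence; the highest score is the best match.
scores = {name: cosine_similarity(cv_text, desc) for name, desc in competence_model.items()}
best_match = max(scores, key=scores.get)
```

A real pipeline would add tokenisation, stemming and weighting, but the comparison-against-a-reference-point structure is the same.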

Design/methodology/approach

A design science approach informed the development and evaluation of the ICT artifact presented in this paper.

Findings

A proof of concept shows that the ICT artifact can support assessors within the validation of prior learning procedure. Moreover, the output of such an ICT artifact can be used to structure documentation in the validation process.

Research limitations/implications

Evaluating the artifact shows that ICT support for assessing documented learning outcomes is a promising endeavor but remains a challenge. Further research should work on standardized ways to document professional competences, on ICT artifacts that capture the semantic content of documents, and on refining ontologies of theoretical models of professional competences.

Practical implications

Text mining methods to assess professional competences rely on large bodies of textual data, and thus a thoroughly built and large portfolio is necessary as input for this ICT artifact.

Originality/value

Following the recent call of European policymakers to develop standardized and ICT-based approaches for the assessment of professional competences, an ICT artifact that supports the automatized assessment of professional competences within the validation of prior learning is designed and evaluated.

Details

European Journal of Training and Development, vol. 44 no. 2/3
Type: Research Article
ISSN: 2046-9012

Open Access
Article
Publication date: 2 April 2024

Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman

Abstract

Purpose

In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.

Design/methodology/approach

On a sample of over 230,000 records with close to 12,000 distinct DDC classes, Annif, an open source tool developed by the National Library of Finland, was applied in the following implementations: a lexical algorithm, a support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted on a sample of 60 records to investigate the value and inter-rater agreement of automatically assigned classes.
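
The ensemble idea, combining suggestions from several member backends, can be sketched as weighted score averaging; the backend names, DDC codes and scores below are invented for illustration and do not reproduce Annif's actual implementation:

```python
def ensemble_suggest(suggestions, weights=None):
    """Merge per-backend class scores by weighted averaging and
    return (class, score) pairs ranked best first."""
    weights = weights or [1.0] * len(suggestions)
    total = sum(weights)
    combined = {}
    for backend_scores, w in zip(suggestions, weights):
        for ddc_class, score in backend_scores.items():
            combined[ddc_class] = combined.get(ddc_class, 0.0) + w * score / total
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative scores for one record from three hypothetical member backends.
lexical = {"839.7": 0.6, "800": 0.2}
svc = {"839.7": 0.5, "820": 0.4}
fasttext_scores = {"820": 0.7, "839.7": 0.3}

ranking = ensemble_suggest([lexical, svc, fasttext_scores])
top_class = ranking[0][0]  # the class the ensemble would suggest first
```

A class that several backends agree on ("839.7" here) outranks a class that only one backend scores highly, which is the usual motivation for ensembling.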

Findings

The best results were achieved with the ensemble approach, which reached 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.

Originality/value

The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”, clarifying the classification standards and data quality characteristics of index data. A data quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, explanatory structural models, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and hierarchical design of the index system can be realized.
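
The “data review – data cleaning – data conversion – data inspection” chain can be sketched as four composed steps; the rules, thresholds and records below are illustrative assumptions, not the authors' actual procedures:

```python
def review(records):
    """Data review: drop records with missing values."""
    return [r for r in records if r.get("value") is not None]

def clean(records):
    """Data cleaning: drop obvious outliers (illustrative threshold)."""
    return [r for r in records if abs(r["value"]) < 1000]

def convert(records):
    """Data conversion: rescale values to a common 0-1 range."""
    values = [r["value"] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [{**r, "value": (r["value"] - lo) / span} for r in records]

def inspect(records):
    """Data inspection: verify every converted value is in [0, 1]."""
    assert all(0.0 <= r["value"] <= 1.0 for r in records)
    return records

raw = [{"id": 1, "value": 10}, {"id": 2, "value": None},
       {"id": 3, "value": 5000}, {"id": 4, "value": 30}]
diagnosed = inspect(convert(clean(review(raw))))
```

Chaining the steps in this fixed order means each stage can assume the guarantees established by the previous one, which is the point of a staged diagnosis pipeline.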

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality inspection process for missing data is designed, based on systematic thinking about the whole and the individual. Aiming at the accuracy, reliability and feasibility of the patched data, a quality inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 8 December 2020

Matjaž Kragelj and Mirjana Kljajić Borštnar

Abstract

Purpose

The purpose of this study is to develop a model for automated classification of old digitised texts to the Universal Decimal Classification (UDC), using machine-learning methods.

Design/methodology/approach

The general research approach is inherent to design science research, in which the problem of UDC assignment of the old, digitised texts is addressed by developing a machine-learning classification model. A corpus of 70,000 scholarly texts, fully bibliographically processed by librarians, was used to train and test the model, which was used for classification of old texts on a corpus of 200,000 items. Human experts evaluated the performance of the model.
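
The train-then-classify setup can be illustrated with a toy stand-in; the nearest-neighbour rule, example texts and UDC codes below are invented for illustration and are not the study's actual machine-learning model:

```python
def word_overlap(a, b):
    """Number of shared word types between two texts."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def predict_udc(text, training):
    """Assign the UDC class of the most lexically similar training text."""
    best_label, _ = max(
        ((label, word_overlap(text, doc)) for doc, label in training),
        key=lambda pair: pair[1],
    )
    return best_label

# Illustrative bibliographically processed training corpus: (text, UDC class).
training = [
    ("medieval manuscripts and early printed books", "09"),
    ("differential equations and numerical analysis", "517"),
    ("library catalogues and bibliographic description", "025"),
]

label = predict_udc("a study of early manuscripts and printed books", training)
```

The real model learns from 70,000 labelled texts rather than three, but the structure, fit on librarian-labelled records, then predict classes for unlabelled old texts, is the same.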

Findings

Results suggest that machine-learning models can correctly assign the UDC at some level for almost any scholarly text. Furthermore, the model can be recommended for the UDC assignment of older texts. Ten librarians corroborated this on 150 randomly selected texts.

Research limitations/implications

The main limitations of this study were the unavailability of labelled older texts and the limited availability of librarians.

Practical implications

The classification model can provide a recommendation to the librarians during their classification work; furthermore, it can be implemented as an add-on to full-text search in the library databases.

Social implications

The proposed methodology supports librarians by recommending UDC classifiers, thus saving time in their daily work. By automatically classifying older texts, digital libraries can provide a better user experience by enabling structured searches. These contribute to making knowledge more widely available and useable.

Originality/value

These findings contribute to the field of automated classification of bibliographical information with the usage of full texts, especially in cases in which the texts are old, unstructured and in which archaic language and vocabulary are used.

Details

Journal of Documentation, vol. 77 no. 3
Type: Research Article
ISSN: 0022-0418
