Search results

1 – 10 of 103
Open Access
Article
Publication date: 4 July 2023

Joacim Hansson

Abstract

Purpose

In this article, the author discusses works from the French Documentation Movement of the 1940s and 1950s with regard to how they formulate bibliographic classification systems as documents. Significant writings by Suzanne Briet, Éric de Grolier and Robert Pagès are analyzed in the light of current document-theoretical concepts and discussions.

Design/methodology/approach

Conceptual analysis.

Findings

The French Documentation Movement provided a rich intellectual environment in the late 1940s and early 1950s, resulting in original works on documents and the ways these may be represented bibliographically. These works display a variety of approaches, from object-oriented description to notational concept-synthesis, and definitions of classification systems as isomorphic documents at the center of a politically informed critique of modern society.

Originality/value

The article brings together historical and conceptual elements in the analysis which have not previously been combined in Library and Information Science literature. In the analysis, the article discusses significant contributions to classification and document theory that hitherto have eluded attention from the wider international Library and Information Science research community. Through this, the article contributes to the currently ongoing conceptual discussion on documents and documentality.

Details

Journal of Documentation, vol. 80 no. 3
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 23 January 2024

Wang Zengqing, Zheng Yu Xie and Jiang Yiling

Abstract

Purpose

With the rapid development of railway-intelligent video technology, scene understanding is becoming more and more important, and semantic segmentation is a major part of it. There is an urgent need for an algorithm with high accuracy and real-time performance to meet current railway requirements for sign identification. In response to this demand, this paper aims to explore a variety of models, accurately locate and segment important railway signs based on the improved SegNeXt algorithm, supplement the railway safety protection system and improve the intelligence level of railway safety protection.

Design/methodology/approach

This paper studies the performance of existing models on RailSem19 and uses that performance to expose each model's defects, with the aim of developing an algorithm model dedicated to railway semantic segmentation. The authors explore the optimal configuration of the SegNeXt model for railway scenes and achieve the paper's aim by improving the encoder and decoder structure.

Findings

This paper proposes an improved SegNeXt algorithm: it first explores the performance of various models on railways, studies the problems semantic segmentation faces in this domain and then analyzes those problems in detail. While retaining SegNeXt's original, excellent MSCAN encoder, the method uses multiscale information fusion, together with techniques such as multi-head attention and masking, to further extract detailed features, solving the original SegNeXt algorithm's problem of inaccurate object segmentation. The improved algorithm is of great significance for the segmentation and recognition of railway signs.
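The decoder changes summarized in this abstract (multiscale fusion of encoder features, refined with multi-head attention) can be sketched in miniature. This is a toy illustration, not the authors' implementation: the shapes, the identity-projection attention and the nearest-neighbor `upsample` helper are all invented for the example.

```python
import numpy as np

def upsample(f, scale):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return f.repeat(scale, axis=1).repeat(scale, axis=2)

def multi_head_attention(tokens, num_heads):
    """Self-attention over (N, C) pixel tokens, channels split across heads.
    Identity projections keep the sketch short; real decoders learn
    query/key/value weight matrices."""
    n, c = tokens.shape
    d = c // num_heads
    out = np.empty_like(tokens)
    for h in range(num_heads):
        q = k = v = tokens[:, h * d:(h + 1) * d]
        scores = q @ k.T / np.sqrt(d)
        scores = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn = scores / scores.sum(axis=1, keepdims=True)  # row-wise softmax
        out[:, h * d:(h + 1) * d] = attn @ v
    return out

# three encoder scales, already projected to a common channel width C=8
feats = [np.random.rand(8, 16, 16), np.random.rand(8, 8, 8), np.random.rand(8, 4, 4)]
fused = feats[0] + upsample(feats[1], 2) + upsample(feats[2], 4)  # multiscale fusion
tokens = fused.reshape(8, -1).T                                   # (256, 8) tokens
refined = multi_head_attention(tokens, num_heads=2)
print(refined.shape)  # (256, 8)
```

In a real decoder each scale would first pass through learned 1×1 projections and the attention would use learned weights; the sketch only shows how fusion and attention compose.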

Research limitations/implications

The model constructed in this paper has advantages in the feature segmentation of distant small objects, but it still suffers from segmentation fracture along the railway, which is not completely segmented. In addition, in the throat area the complexity of the railway makes the segmentation results inaccurate.

Social implications

The identification and segmentation of railway signs based on the improved SegNeXt algorithm is of great significance for understanding existing railway scenes: it can greatly improve the classification and recognition of small railway object features and thereby raise the level of railway security.

Originality/value

This article introduces an enhanced version of the SegNeXt algorithm, which aims to improve the accuracy of semantic segmentation on railways. The study begins by investigating the performance of different models in railway scenarios and identifying the challenges associated with semantic segmentation in this particular domain. To address these challenges, the proposed approach builds upon the strong foundation of the original SegNeXt algorithm, leveraging techniques such as multi-scale information fusion, multi-head attention and masking to extract finer details and enhance feature representation. By doing so, the improved algorithm effectively resolves the issue of inaccurate object segmentation encountered in the original SegNeXt algorithm. This advancement holds significant importance for the accurate recognition and segmentation of railway signage.

Details

Smart and Resilient Transportation, vol. 6 no. 1
Type: Research Article
ISSN: 2632-0487

Open Access
Article
Publication date: 13 September 2022

Oliver Disney, Mattias Roupé, Mikael Johansson and Alessio Domenico Leto

Abstract

Purpose

Building information modeling (BIM) is mostly limited to the design phase where two parallel processes exist, i.e. creating 2D-drawings and BIM. Towards the end of the design process, BIM becomes obsolete as focus shifts to producing static 2D-drawings, which leads to a lack of trust in BIM. In Scandinavia, a concept known as Total BIM has emerged, which is a novel “all-in” approach where BIM is the single source of information throughout the project. This paper's purpose is to investigate the overall concept and holistic approach of a Total BIM project to support implementation and strategy work connected to BIM.

Design/methodology/approach

Qualitative data were collected through eight semi-structured interviews with digitalization leaders from the case study project. Findings were analyzed using a holistic framework for BIM implementation.

Findings

The Total BIM concept was contingent on the strong interdependences between commonly found isolated BIM uses. Four main success factors were identified: production-oriented BIM as the main contractual and legally binding construction document; cloud-based model management; user-friendly on-site mobile BIM software; and strong leadership.

Originality/value

A unique case is studied where BIM is used throughout all project phases as a single source of information and communication platform. No 2D paper drawings were used on-site and the Total BIM case study highlights the importance of a new digitalized construction process.

Details

Smart and Sustainable Built Environment, vol. 13 no. 3
Type: Research Article
ISSN: 2046-6099

Open Access
Article
Publication date: 16 January 2024

Jani Koskinen, Kai Kristian Kimppa, Janne Lahtiranta and Sami Hyrynsalmi

Abstract

Purpose

The competition in the academe has always been tough, but today, the academe seems to be more like an industry than an academic community as academics are evaluated through quantified and economic means.

Design/methodology/approach

This article leans on Heidegger’s thoughts on the essence of technology and his ontological view on being to show the dangers that lie in this quantification of researchers and research.

Findings

Despite the benefits that information systems (ISs) offer to people and research, it seems that technology has made it possible to objectify researchers and research. This has a negative impact on the academe and should thus be looked into especially by the IS field, which should note the problems that exist in its core. This phenomenon of quantified academics is clearly visible at academic quantification sites, where academics are evaluated using metrics that count their output. It seems that the essence of technology has disturbed the way research is valued by emphasising its quantifiable aspects. The study claims that it is important to look for other ways to evaluate researchers rather than trying to maximise research production, which has led to the flooding of articles that few have the time or interest to read.

Originality/value

This paper offers new insights into the current phenomenon of the quantification of academics and underlines the need for critical changes in order to achieve the academic culture that is desirable for future academics.

Details

Information Technology & People, vol. 37 no. 8
Type: Research Article
ISSN: 0959-3845

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
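The tile-based workflow described above (fixed-size cutouts scored by a binary MOUND/NOT-MOUND classifier, then thresholded at a probability cutoff) can be sketched as follows. The `predict` stand-in and all sizes are invented for illustration; the actual study used a pre-trained CNN.

```python
import numpy as np

def tile_scores(image, tile, predict):
    """Slide a fixed-size, non-overlapping window over the image and score
    each tile; `predict` returns the predicted probability that the tile
    contains a mound."""
    h, w = image.shape[:2]
    return [((y, x), predict(image[y:y + tile, x:x + tile]))
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

def flag_mounds(scores, threshold=0.60):
    """Keep tiles above the identification threshold (60% in the study)."""
    return [(pos, p) for pos, p in scores if p > threshold]

# toy stand-in for the pre-trained CNN: scaled mean brightness as "probability"
img = np.zeros((512, 512))
img[100:150, 200:260] = 1.0  # one bright "mound-like" patch
scores = tile_scores(img, tile=128, predict=lambda t: min(1.0, float(t.mean() * 50)))
print(len(scores))               # 16 tiles
print(len(flag_mounds(scores)))  # 2 tiles exceed the threshold
```

Flagged tiles would then be compared against field-verified mound locations to compute the true/false positive and negative rates reported in the findings.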

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high and that the model was misidentifying most features. With an identification threshold of 60% probability, and with the CNN assessing fixed-size tiles, tile-based false negative rates were 95–96% and false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that the use of artificial intelligence (AI) and ML approaches to archaeological prospection has grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 15 January 2024

Christine Prince, Nessrine Omrani and Francesco Schiavone

Abstract

Purpose

Research on online user privacy shows that empirical evidence on how privacy literacy relates to users' information privacy empowerment is missing. To fill this gap, this paper investigated the respective influence of two primary dimensions of online privacy literacy – namely declarative and procedural knowledge – on online users' information privacy empowerment.

Design/methodology/approach

An empirical analysis is conducted using a dataset collected in Europe: a survey carried out in 2019 among 27,524 respondents representative of the European population.

Findings

The main results show that users' procedural knowledge is positively linked to users' privacy empowerment, while the relationship between declarative knowledge and privacy empowerment is partially supported. Greater awareness of firms' and organizations' practices regarding data collection and further use was found to be significantly associated with increased privacy empowerment; unexpectedly, however, awareness of the GDPR and users' privacy empowerment are negatively associated. The empirical findings also reveal that greater online privacy literacy is associated with heightened users' information privacy empowerment.
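The kind of association reported above can be illustrated with a toy linear model on synthetic data. This is purely illustrative: the variables, coefficients and sample are invented and bear no relation to the actual survey or the authors' statistical model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# synthetic stand-ins for the two privacy-literacy dimensions
declarative = rng.normal(size=n)  # knowledge of practices and regulation
procedural = rng.normal(size=n)   # knowledge of self-protection skills
# empowerment is generated with a stronger procedural effect, mirroring
# the direction of the reported findings (coefficients are invented)
empowerment = 0.2 * declarative + 0.6 * procedural + rng.normal(scale=0.5, size=n)

# ordinary least squares: empowerment ~ intercept + declarative + procedural
X = np.column_stack([np.ones(n), declarative, procedural])
beta, *_ = np.linalg.lstsq(X, empowerment, rcond=None)
print(np.round(beta, 2))  # roughly [0.0, 0.2, 0.6]
```

The recovered coefficients show the procedural dimension dominating, which is the pattern the findings describe; a real analysis of survey data would also handle categorical items, weights and controls.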

Originality/value

While a few advanced studies have made systematic efforts to measure the changes that have occurred on websites since GDPR enforcement, it remains unclear how individuals perceive, understand and apply the GDPR rights/guarantees, and how likely these are to strengthen users' information privacy control. Therefore, this paper contributes empirically to understanding how online users' privacy literacy, shaped by both declarative and procedural knowledge, is likely to affect users' information privacy empowerment. The study empirically investigates the effectiveness of the GDPR in raising users' information privacy empowerment from a user-based perspective. Results stress the importance of greater transparency of data tracking and processing decisions made by online businesses and services to strengthen users' control over information privacy. Study findings also emphasize the crucial need for more educational efforts to raise users' awareness of the GDPR rights/guarantees related to data protection. Empirical findings also show that users who are more likely to adopt self-protective approaches to reinforce personal data privacy are more likely to perceive greater control over personal data. A broad implication of this finding for practitioners and e-businesses is the need to empower users with adequate privacy protection tools to ensure more confidential transactions.

Details

Information Technology & People, vol. 37 no. 8
Type: Research Article
ISSN: 0959-3845

Open Access
Article
Publication date: 19 December 2023

Nadia Arshad, Rotem Shneor and Adele Berndt

Abstract

Purpose

Crowdfunding is an increasingly popular channel for project fundraising for entrepreneurial ventures. Such efforts require fundraisers to develop and manage a crowdfunding campaign over a period of time and through several stages. Thus, the authors aim to identify the stages fundraisers go through in their crowdfunding campaign process and how their engagement evolves throughout this process.

Design/methodology/approach

Following a multiple case study research design analysing six successful campaigns, the current study suggests a taxonomy of stages the fundraisers go through in their crowdfunding campaign management process while identifying the types of engagement displayed and their relative intensity at each of these stages.

Findings

The study proposes a five-stage process framework (pre-launch, launch, mid-campaign, conclusion and post-campaign), accompanied by a series of propositions outlining the relative intensity of different types of engagement throughout this process. The authors show that engagement appears with high intensity at the pre-launch stage, and to a lesser degree at the post-campaign stage, while showing low intensity at the stages in between. More specifically, cognitive and behavioural engagement are most prominent at the pre-launch and post-campaign stages, emotional engagement is highest during the launch, mid-campaign and conclusion stages, and social engagement maintains moderate intensity throughout the process.

Originality/value

This study focuses on the campaign process using engagement theory, thus identifying the differing engagement patterns throughout the dynamic crowdfunding campaign management process, not just in one part.

Details

International Journal of Entrepreneurial Behavior & Research, vol. 30 no. 11
Type: Research Article
ISSN: 1355-2554

Open Access
Article
Publication date: 14 May 2024

Klára Rybenská, Lenka Knapová, Kamil Janiš, Jitka Kühnová, Richard Cimler and Steriani Elavsky

Abstract

Purpose

A wide gap exists between the innovation and development of self-monitoring, analysis and reporting technology (SMART) technologies and the actual adoption by older adults or those caring for them. This paper aims to increase awareness of available technologies and describes their suitability for older adults with different needs. SMART technologies are intelligent devices and systems that enable autonomous monitoring of their status, data analysis or direct feedback provision.

Design/methodology/approach

This is a scoping review of SMART technologies used and marketed to older adults or for providing care.

Findings

Five categories of SMART technologies were identified: (1) wearable technologies and smart tools of daily living; (2) noninvasive/unobtrusive technology (i.e. passive technologies monitoring the environment, health and behavior); (3) complex SMART systems; (4) interactive technologies; and (5) assistive and rehabilitation devices. Technologies were then linked with needs related to everyday practical tasks (mainly applications supporting autonomous, independent living), social and emotional support, health monitoring/management and compensatory assistance and rehabilitation.

Research limitations/implications

When developing, testing or implementing technologies for older adults, researchers should clearly identify concrete needs these technologies help meet to underscore their usefulness.

Practical implications

Older adults and caregivers should weigh the pros and cons of different technologies and consider the key needs of older adults before investing in any tech solution.

Social implications

SMART technologies that meet older adults' needs help support independent, autonomous living for as long as possible, as well as aiding the transition to assisted or institutionalized care.

Originality/value

This is the first review to explicitly link existing SMART technologies with the concrete needs of older adults, serving as a useful guide for both older adults and caregivers in terms of available technology solutions.

Details

Journal of Enabling Technologies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-6263

Open Access
Article
Publication date: 23 February 2024

Elin K. Funck, Kirsi-Mari Kallio and Tomi J. Kallio

Abstract

Purpose

This paper aims to investigate the process by which performative technologies (PTs), in this case accreditation work in a business school, take form and how humans engage in making up such practices. It studies how academics come to accept and even identify with the quantitative representations of themselves in a translation process.

Design/methodology/approach

The research involved a longitudinal, self-ethnographic case study that followed the accreditation process of one Nordic business school from 2015 to 2021.

Findings

The findings show how the PT pushed for different engagements in various phases of the translation process. Early in the translation process, the PT promoted engagement because of self-realization and the ability for academics to proactively influence the prospective competitive milieu. However, as academic qualities became fabricated into numbers, the PT was able to request compliance, but also to induce self-reflection and self-discipline by forcing academics to compare themselves to set qualities and measures.

Originality/value

The paper advances the field by linking five phases of the translation process (problematization, fabrication, materialization, commensuration and stabilization) to a discussion of why academics come to accept and identify with the quantitative representations of themselves. The results highlight that the materialization phase appears to be the critical point at which calculative practices become persuasive and start influencing academics’ thoughts and actions.

Details

Journal of Accounting & Organizational Change, vol. 20 no. 6
Type: Research Article
ISSN: 1832-5912

Open Access
Article
Publication date: 27 February 2024

Maria Pia Paganelli

Abstract

Purpose

Is there a secret recipe for economic growth?

Design/methodology/approach

No, there is no recipe, but we can extrapolate some pieces of advice from Adam Smith.

Findings

An economy can leave behind its “dull” stagnant state and grow when its markets expand, when the productivity of its workers increases thanks to high compensation, which is seen as an incentive to work harder, and when lobbying and cronyism are kept at bay. Luck plays a role too, but these three ingredients are necessary, even if not sufficient, for an economy to grow and thus be “cheerful.”

Originality/value

These three aspects (expansion of markets, liberal compensation of workers and the restraint of lobbying), especially when combined, have often been underestimated in Smith’s understanding of the possible sources of economic growth.

Details

EconomiA, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1517-7580
