Search results
1 – 10 of 114
Abstract
Purpose
The article extends the distinction of semantic from syntactic labour to comprehend all forms of mental labour. It answers a critique from de Fremery and Buckland, which required envisaging mental labour as a differentiated spectrum.
Design/methodology/approach
The paper adopts a discursive approach. It first reviews the significance and extensive diffusion of the distinction of semantic from syntactic labour. Second, it integrates semantic and syntactic labour along a vertical dimension within mental labour, indicating analogies in principle with, and differences in application from, the inherited distinction of intellectual from clerical labour. Third, it develops semantic labour to the very highest level, on a consistent principle of differentiation from syntactic labour. Finally, it reintegrates the understanding developed of semantic labour with syntactic labour, confirming that they can fully and informatively occupy mental labour.
Findings
The article further validates the distinction of semantic from syntactic labour. It enables us to address Norbert Wiener's classic challenge of appropriately distributing activity between human and computer.
Research limitations/implications
The article transforms work in progress into knowledge for diffusion.
Practical implications
It has practical implications for determining what tasks to delegate to computational technology.
Social implications
The paper has social implications for the understanding of appropriate human and machine computational tasks and our own distinctive humanness.
Originality/value
The paper is highly original. Although based on preceding research from the late 20th century, it is the first separately published full account of semantic and syntactic labour.
Ying Gao, Qiang Zhang, Xiaoran Wang, Yanmei Huang, Fanshuang Meng and Wan Tao
Abstract
Purpose
Currently, Tang tomb mural cultural relic resources are multi-source and heterogeneous, with little effective organization and sharing between them. Therefore, this study aims to propose a multidimensional knowledge discovery solution for Tang tomb mural cultural relic resources.
Design/methodology/approach
Taking the Tang tomb murals collected by the Shaanxi History Museum as an example, and after clarifying the relevant concepts of Tang tomb mural resources and considering both dynamic and static dimensions, a top-down approach was adopted to first construct an ontology model of Tang tomb mural cultural relic resources. The actual case data was then imported into the Neo4j graph database according to the defined schema hierarchy to complete the static organization of knowledge, with results presented in multimodal form for knowledge reasoning and retrieval. In addition, geographic information system (GIS) technology is used to dynamically display the spatiotemporal distribution of Tang tomb mural resources, and the distribution trend is analysed from a digital humanities perspective.
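The static organization step described above — importing case data into a graph database according to a defined schema — can be sketched by generating Cypher from record dictionaries. The labels, properties and relationship names below are illustrative assumptions, not the study's actual ontology:

```python
def cypher_for(mural):
    """Render one mural record as a single Cypher query that merges the
    mural node, its theme nodes and the DEPICTS relationships between them."""
    statements = [
        f"MERGE (m:Mural {{name: '{mural['name']}', dynasty: '{mural['dynasty']}'}})"
    ]
    for i, theme in enumerate(mural["themes"]):
        statements.append(f"MERGE (t{i}:Theme {{name: '{theme}'}})")
        statements.append(f"MERGE (m)-[:DEPICTS]->(t{i})")
    return "\n".join(statements)

mural = {"name": "Polo Game", "dynasty": "Tang", "themes": ["sport", "court life"]}
print(cypher_for(mural))
```

Each returned string is one query; running it through the official Neo4j Python driver (`session.run`) would merge the nodes and relationships idempotently. A production pipeline would pass query parameters rather than interpolating strings.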
Findings
The multi-dimensional knowledge discovery of Tang tomb mural cultural relics resources can help establish the correlation and spatiotemporal relationship between resources, providing support for semantic retrieval and navigation, knowledge discovery and visualization and so on.
Originality/value
This study takes the murals in the collection of the Shaanxi History Museum as an example, revealing potential knowledge associations in a static and intelligent way, achieving knowledge discovery and management of Tang tomb murals and dynamically presenting the spatial distribution of Tang tomb murals through GIS technology. It thereby meets the knowledge presentation needs of different users and opens up new ideas for the study of Tang tomb murals.
Abstract
Purpose
Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.
Design/methodology/approach
This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
Findings
This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitated comprehensive record descriptions about different record entities. Applying archival ontologies with general-purpose vocabularies to digital records was advised to enhance metadata coherence and semantic search.
Originality/value
Most digital archives in service in Korea make limited proper use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw data on archives, data preprocessing and data enrichment, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built from the RiC-O, Wikidata and Schema.org vocabularies, together with semantic queries over it, can supplement keyword search in conventional digital archives.
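The gain of semantic queries over keyword search can be illustrated with a toy triple store in plain Python. The entities and properties below (loosely echoing RiC-O and Schema.org terms) are invented for illustration, not drawn from the 97imf.kr data:

```python
# A toy triple store: (subject, predicate, object) statements mixing
# record-description and general-purpose properties.
triples = {
    ("record1", "type", "Record"),
    ("record1", "title", "IMF bailout interview, 1997"),
    ("record1", "hasCreator", "person1"),
    ("person1", "name", "Jane Doe"),
}

def objects(s, p):
    """All objects of triples matching the pattern (s, p, ?)."""
    return [o for (s2, p2, o) in triples if (s2, p2) == (s, p)]

def records_with_creators():
    """Join records to their creators' names -- a graph traversal
    that a flat keyword index cannot express."""
    results = []
    for s, p, o in triples:
        if p == "type" and o == "Record":
            title = objects(s, "title")[0]
            creator = objects(objects(s, "hasCreator")[0], "name")[0]
            results.append((title, creator))
    return results

print(records_with_creators())
```

A real implementation would store the triples in an RDF store and express the join as a SPARQL query; the traversal logic is the same.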
Abstract
Purpose
Recent archiving and curatorial practices have taken advantage of advances in digital technologies, creating immersive and interactive experiences to emphasize the plurality of memory materials, encourage personalized sense-making and extract, manage and share the ever-growing surrounding knowledge. Audiovisual (AV) content, despite its growing importance and popularity, is less explored in this respect than texts and images. This paper examines the trend of datafication in AV archives and answers the critical question, "What to extract from AV materials and why?".
Design/methodology/approach
This study is rooted in a comprehensive state-of-the-art review of digital methods and curatorial practices in AV archives. The thinking model for mapping AV archive data to purposes is based on pre-existing models for understanding multimedia content and on metadata standards.
Findings
The thinking model connects AV content descriptors (data perspective) with purposes (curatorial perspective) and provides a theoretical map of how information extracted from AV archives should be fused and embedded for memory institutions. The model is constructed by looking into three broad dimensions of audiovisual content: archival; affective and aesthetic; and social and historical.
Originality/value
This paper contributes uniquely to the intersection of computational archives, audiovisual content and public sense-making experiences. It provides updates and insights to work towards datafied AV archives and cope with the increasing needs in the sense-making end using AV archives.
Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista
Abstract
Purpose
This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client's expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually with little to no computer aid, within very austere time constraints, even though these results determine the company's bid quality and are contractually binding.
Design/methodology/approach
This paper seeks to compile applications of machine learning (ML) and natural language processing in the architectural engineering and construction sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
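As a minimal sketch of the TC task the review surveys — mapping a tender line item to an internal database of priced tasks — a bag-of-words cosine match already illustrates the pipeline that the reviewed algorithms refine. The task codes and descriptions are invented examples:

```python
import math
from collections import Counter

def bow(text):
    """Whitespace bag-of-words; real systems would tokenize properly."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Internal database of priced task descriptions (invented examples).
tasks = {
    "T01": "excavation of foundation trench",
    "T02": "supply and install concrete formwork",
    "T03": "painting of interior walls two coats",
}

def classify(line_item):
    """Map a tender line item to the closest internal task code."""
    return max(tasks, key=lambda code: cosine(bow(line_item), bow(tasks[code])))

print(classify("interior walls painting two coats"))  # -> T03
```

The ML and NLP approaches in the review replace the bag-of-words vectors with learned representations, but the mapping problem they solve is the one shown here.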
Findings
This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.
Originality/value
Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.
Valeria Noguti and David S. Waller
Abstract
Purpose
This research investigates how consumers who are most active on Facebook during the day versus in the evening differ, how their ad consumption differs, and how advertising effects vary as a function of a key moderator: gender.
Design/methodology/approach
Using a survey of 281 people, the research identifies Facebook users who use mobile social media more intensely during the day versus in the evening, and measures five Facebook mobile advertising outcomes: brand recall, product recall, clicking on ads, acting on ads and purchases.
Findings
The results show that women who are using social media more intensely during the day are more likely to use Facebook to seek information, hence, Facebook mobile ads tend to be more effective for these users compared to those in the evening.
Research limitations/implications
This research contributes to the literature by analyzing how the time of day affects social media behavior in relation to mobile advertising effectiveness, and by broadening the scope of mobile advertising effectiveness research beyond clicks on ads to include measures such as brand and product recall.
Practical implications
By analyzing the effectiveness of mobile advertising on social media as a function of the time of day, advertisers can be more targeted in their media buys and make better use of their social media budgets. Specifically, advertising is more effective for women who use social media (Facebook) more intensely during the day than for those who use it more intensely in the evening, as the former tend to seek more information than the latter.
Social implications
This research extends media ecology theory by drawing on circadian rhythm research to provide a first demonstration of how the time of day relates to different uses of mobile social media, which in turn relate to social media mobile advertising consumption.
Originality/value
While research on social media advertising has been steadily increasing, little has been explored about how users consume ads when they engage with social media at different periods throughout the day. This paper extends media ecology theory by investigating time of day, drawing on the circadian rhythm literature, and how it relates to social media usage.
Ambica Ghai, Pradeep Kumar and Samrat Gupta
Abstract
Purpose
Web users rely heavily on online content to make decisions without assessing the veracity of the content. Online content comprising text, image, video or audio may be tampered with to influence public opinion. Since consumers of online information (and misinformation) tend to trust content when images supplement the text, image manipulation software is increasingly being used to forge images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.
Design/methodology/approach
The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. An image transformation technique aids the identification of relevant features for the network to train effectively. A pre-trained, customized convolutional neural network is then trained on public benchmark datasets, and performance is evaluated on the test dataset using various metrics.
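For contrast with the deep-learning framework, the classical cue it learns to generalize — duplicated pixel blocks in copy-move forgery — can be sketched with simple block hashing. This is a baseline illustration on synthetic data, not the authors' method:

```python
import random

def copy_move_candidates(img, block=8):
    """Hash non-overlapping blocks of a grayscale image (list of rows)
    and report coordinate pairs with identical pixel content --
    the classical cue for copy-move forgery."""
    h, w = len(img), len(img[0])
    seen, matches = {}, []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = tuple(tuple(row[x:x + block]) for row in img[y:y + block])
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Synthetic 32x32 "image" with one 8x8 block duplicated elsewhere.
rng = random.Random(0)
img = [[rng.randrange(256) for _ in range(32)] for _ in range(32)]
for dy in range(8):                        # simulate a copy-move forgery:
    img[16 + dy][16:24] = img[dy][0:8]     # paste block (0, 0) at (16, 16)
print(copy_move_candidates(img))           # -> [((0, 0), (16, 16))]
```

Exact hashing breaks as soon as the pasted region is rescaled or recompressed, which is why learned features of the kind the framework trains are needed in practice.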
Findings
The comparative analysis of image transformation techniques and the experiments conducted on benchmark datasets from a variety of socio-cultural domains establish the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.
Research limitations/implications
This study bears implications for several important aspects of research on image forgery detection. First, it adds to the recent discussion on feature extraction and learning for image forgery detection: while prior research hand-crafted the features, the proposed solution contributes to the stream of literature that automatically learns features and classifies images. Second, this research contributes to the ongoing effort to curtail the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms. The study addresses the call for greater emphasis on the development of robust image transformation techniques.
Practical implications
This study carries important practical implications for various domains, such as forensic sciences, media and journalism, where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of an article or post before it is shared over the Internet. Content shared over the Internet by users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.
Social implications
In the current scenario, wherein most image forgery detection studies attempt to assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early detection of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the spread and psychological impact of forged images on social media.
Originality/value
This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little explored image transformation techniques and customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.
Miquel Centelles and Núria Ferran-Ferrer
Abstract
Purpose
This study develops a comprehensive framework for assessing knowledge organization systems (KOSs), including the taxonomy of Wikipedia and the ontologies of Wikidata, with a specific focus on enhancing management and retrieval from a gender nonbinary perspective.
Design/methodology/approach
This study employs heuristic and inspection methods to assess Wikipedia's KOS, ensuring compliance with international standards. It evaluates the efficiency of retrieving non-masculine gender-related articles using the Catalan Wikipedia's category scheme, identifying limitations. Additionally, a novel assessment of Wikidata ontologies examines their structure and coverage of gender-related properties, comparing them to Wikipedia's taxonomy to identify advantages and enhancements.
Findings
This study evaluates Wikipedia’s taxonomy and Wikidata’s ontologies, establishing evaluation criteria for gender-based categorization and exploring their structural effectiveness. The evaluation process suggests that Wikidata ontologies may offer a viable solution to address Wikipedia’s categorization challenges.
Originality/value
The assessment of Wikipedia categories (taxonomy) based on KOS standards leads to the conclusion that there is ample room for improvement, not only in matters concerning gender identity but also in the overall KOS to enhance search and retrieval for users. These findings bear relevance for the design of tools to support information retrieval on knowledge-rich websites, as they assist users in exploring topics and concepts.
Yaolin Zhou, Zhaoyang Zhang, Xiaoyu Wang, Quanzheng Sheng and Rongying Zhao
Abstract
Purpose
The digitalization of archival management has developed rapidly with the maturation of digital technology. With the exponential growth of data, archival resources have transitioned from single modalities, such as text, images, audio and video, to integrated multimodal forms. This paper identifies key trends, gaps and areas of focus in the field. Furthermore, it proposes a theoretical organizational framework based on deep learning to address the challenges of managing archives in the era of big data.
Design/methodology/approach
Via a comprehensive systematic literature review, the authors investigate the field of multimodal archive resource organization and the application of deep learning techniques in archive organization. A systematic search and filtering process is conducted to identify relevant articles, which are then summarized, discussed and analyzed to provide a comprehensive understanding of existing literature.
Findings
The authors' findings reveal that most research on multimodal archive resources predominantly focuses on aspects related to storage, management and retrieval. Furthermore, the utilization of deep learning techniques in image archive retrieval is increasing, highlighting their potential for enhancing image archive organization practices; however, practical research and implementation remain scarce. The review also underscores gaps in the literature, emphasizing the need for more practical case studies and the application of theoretical concepts in real-world scenarios. In response to these insights, the authors' study proposes an innovative deep learning-based organizational framework. This proposed framework is designed to navigate the complexities inherent in managing multimodal archive resources, representing a significant stride toward more efficient and effective archival practices.
Originality/value
This study comprehensively reviews the existing literature on multimodal archive resources organization. Additionally, a theoretical organizational framework based on deep learning is proposed, offering a novel perspective and solution for further advancements in the field. These insights contribute theoretically and practically, providing valuable knowledge for researchers, practitioners and archivists involved in organizing multimodal archive resources.
Rongen Yan, Depeng Dang, Hu Gao, Yan Wu and Wenhui Yu
Abstract
Purpose
Question answering (QA) answers questions posed in natural language. In QA, owing to the subjectivity of users, the questions they query have different expressions, which increases the difficulty of text retrieval. Therefore, the purpose of this paper is to explore a new query rewriting method for QA that integrates multiple related questions (RQs) to form an optimal question. It is also important to generate a new dataset pairing each original query (OQ) with multiple RQs.
Design/methodology/approach
This study collects a new dataset, SQuAD_extend, by crawling a QA community, and uses a word graph to model the collected OQs. Beam search then finds the best path through the graph to obtain the best question. To represent the features of the question in depth, the pretrained BERT model is used to encode the sentences.
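The word-graph-plus-beam-search idea can be sketched as follows: edge weights count word transitions across an OQ and its RQs, and beam search keeps the highest-scoring paths through the graph. The questions and the scoring are illustrative, not the paper's exact formulation:

```python
from collections import defaultdict

def build_word_graph(questions):
    """Edge weights count how often word b follows word a
    across the original query and its related questions."""
    graph = defaultdict(lambda: defaultdict(int))
    for q in questions:
        words = ["<s>"] + q.lower().split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            graph[a][b] += 1
    return graph

def beam_search(graph, width=3, max_len=12):
    """Keep the `width` highest-scoring partial paths; a path's score
    is the sum of the transition counts it traverses."""
    beams = [(0, ["<s>"])]
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, path in beams:
            for word, count in graph[path[-1]].items():
                step = (score + count, path + [word])
                (finished if word == "</s>" else candidates).append(step)
        beams = sorted(candidates, reverse=True)[:width]
        if not beams:
            break
    best_score, best_path = max(finished)
    return " ".join(best_path[1:-1])

questions = [
    "how do i reset my password",
    "how do i reset a forgotten password",
    "how can i reset my password",
]
print(beam_search(build_word_graph(questions)))  # -> how do i reset my password
```

The rewritten question keeps the transitions most RQs agree on; in the paper, BERT then encodes the result for retrieval.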
Findings
The experimental results show three outstanding findings. (1) The quality of the answers is better after adding the RQs of the OQs. (2) Using the word graph to model the question and choose the optimal path is conducive to finding the best question. (3) BERT can deeply characterize the semantics of the question.
Originality/value
The proposed method uses a word graph to construct multiple questions and select the optimal path for rewriting the question, and the quality of the answers is better than the baseline. In practice, the research results can help guide users to clarify their query intentions and finally achieve the best answer.