Search results
1 – 10 of 504

Federico Pianzola, Maurizio Toccu and Marco Viviani
Abstract
Purpose
The purpose of this article is to explore how participants with different motivations (educational or leisure), familiarity with the medium (newbies and active Twitter users) and participation instructions respond to a highly structured digital social reading (DSR) activity in terms of intensity of engagement and social interaction.
Design/methodology/approach
A case study involving students and teachers of 211 Italian high school classes and 242 other Twitter users, who generated a total of 18,962 tweets commenting on a literary text, was conducted. The authors performed both a quantitative analysis focusing on the number of tweets/retweets generated by participants and a network analysis exploiting the study of interactions between them. The authors also classified the tweets with respect to their originality, by using both automated text reuse detection approaches and manual categorization, to identify quotations, paraphrases and other forms of reader response.
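The automated text reuse detection used to separate quotations from original reader responses can be illustrated with a character n-gram overlap measure. This is a simplified sketch, not the authors' actual pipeline; the function names and thresholds are illustrative assumptions:

```python
def char_ngrams(text, n=5):
    """Return the set of character n-grams of a whitespace-normalized, lowercased string."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def reuse_score(comment, source, n=5):
    """Fraction of the comment's character n-grams that also occur in the source text."""
    grams = char_ngrams(comment, n)
    if not grams:
        return 0.0
    return len(grams & char_ngrams(source, n)) / len(grams)

def classify(comment, source, quote_threshold=0.8, paraphrase_threshold=0.3):
    """Label a comment as a quotation, a paraphrase, or an original response."""
    score = reuse_score(comment, source)
    if score >= quote_threshold:
        return "quotation"
    if score >= paraphrase_threshold:
        return "paraphrase"
    return "original"
```

A tweet that copies a passage verbatim scores near 1 and is labeled a quotation, while a personal reaction with little lexical overlap falls below the paraphrase threshold.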
Findings
The decoupling (both in space and time) of text read (in class) and comments (on Twitter) likely led users to mainly share text excerpts rather than original personal reactions to the story. There was almost no interaction outside the classroom, neither with other students nor with generic Twitter users, characterizing this project as a shared experience of “audiencing” a media event. The intensity of social interactions is more related to the breadth of the audience reached by the user-generated content and to a strong retweeting activity. In general, better familiarity with digital (social) media is related to an increase in the level of social interaction.
Originality/value
The authors analyzed one of the largest educational social reading projects ever realized, contributing to the still scarce empirical research about DSR. The authors employed state-of-the-art automated text reuse detection to classify reader response.
Details
Keywords
Flavio M. Cecchini, Greta H. Franzini and Marco C. Passarotti
Abstract
The presence of Latin in heavy metal music ranges from full texts, intros, song and album titles to band names, pseudonyms, and literary quotations. This chapter sheds light on heavy metal's fascination with the history and ‘arcane’ sound of Latin, and investigates its patterns of use in lyrics with the help of Natural Language Processing tools and digitally-available linguistic resources. First, the authors collected a corpus of lyrics containing differing amounts of Latin and enhanced it with descriptive metadata. Next, the authors calculated the richness of the vocabulary and the distribution of content words. The authors processed the corpus with a morphological analyser and performed both a manual and a computational search for intertextuality, including allusions, paraphrase and verbatim quotations of literary sources. The authors show that, despite it being a dead language, Latin is very frequently used in metal. Its historical status appears to fascinate bands and lends itself well to those religious, epic and mysterious themes so characteristic of the heavy metal world. The widespread use of Latin in metal lyrics, however, sees many bands simply reusing Latin texts – mostly from the Bible – or even misspelling literary quotations.
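The vocabulary-richness and content-word measurements can be illustrated with a type-token ratio and a relative frequency distribution. This is a minimal sketch; the tokenization and stopword list are placeholder assumptions, not the linguistic resources the authors used:

```python
from collections import Counter

def type_token_ratio(tokens):
    """Vocabulary richness: distinct word types divided by total tokens."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def content_word_distribution(tokens, stopwords):
    """Relative frequency of each content word (non-stopword) in the lyrics."""
    content = [t for t in tokens if t not in stopwords]
    counts = Counter(content)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}
```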
Martyn Harris, Mark Levene, Dell Zhang and Dan Levene
Abstract
Purpose
The purpose of this paper is to present a language-agnostic approach to facilitate the discovery of “parallel passages” stored in historic and cultural heritage digital archives.
Design/methodology/approach
The authors explore a novel, and relatively simple approach, using a character-based statistical language model combined with a tailored version of the Basic Local Alignment Tool to extract exact and approximate string patterns shared between groups of documents.
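The local-alignment component can be illustrated with a character-level Smith-Waterman scorer, the classic dynamic-programming algorithm underlying BLAST-style alignment. This is a minimal sketch under simple scoring assumptions, not the tailored tool the authors describe:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Score of the best local alignment between two strings at character level."""
    # H[i][j] = best score of an alignment ending at a[:i] and b[:j], floored at 0.
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # extend with a (mis)match
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

Because scores are floored at zero, the algorithm finds the best-matching local region regardless of how dissimilar the surrounding text is, which is what makes it tolerant of dialect variation and transcription noise.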
Findings
The approach is applicable to a wide range of languages, and compensates for variability in the text of the documents resulting from differences in dialect, authorship, language change over time, inaccurate transcriptions and optical character recognition errors introduced by the digitisation process.
Research limitations/implications
A number of case studies demonstrate that the approach is practical and generalisable to a wide range of archives with documents in different languages, domains and of varying quality.
Practical implications
The approach described can be applied to any digital archive of modern and contemporary texts. This makes the approach applicable to digital archives recording historic texts, but also those composed of more recent news articles, for example.
Social implications
The analysis of “parallel passages” enables researchers to quantify the presence and extent of text-reuse in a collection of documents, which can provide useful data on author style, text genres and cultural contexts.
Originality/value
The approach is novel and addresses a need by humanities researchers for tools that can identify similar documents, and local similarities represented by shared text sequences, in a potentially vast archive of documents. As far as the authors are aware, no existing tools provide the same level of tolerance to the language of the documents.
Abstract
Purpose
Upcycling is conceptualised as a digital historical research practice aimed at increasing the scientific value of historical data collections produced in print or in electronic form between the eighteenth and the late twentieth centuries. The concept of upcycling facilitates data rescue and reuse as well as the study of information creation processes deployed by previous generations of researchers.
Design/methodology/approach
Based on a selection of two historical reference works and two legacy collections, an upcycling workflow consisting of three parts (input, processing and documentation and output) is developed. The workflow facilitates the study of historical information creation processes based on paradata analysis and targets the cognitive processes that precede and accompany the creation of historical data collections.
Findings
The proposed upcycling workflow furthers the understanding of computational methods and their role in historical research. Through its focus on the information creation processes that precede and accompany historical research, the upcycling workflow contributes to historical data criticism and digital hermeneutics.
Originality/value
Many historical data collections produced between the eighteenth and the late twentieth century do not comply with the principles of FAIR data. The paper argues that ignoring the work of previous generations of researchers is not an option, because it would make current research practices more vulnerable and would result in losing access to the experiences and knowledge accumulated by previous generations of scientists. The proposed upcycling workflow takes historical data collections seriously and makes them available for future generations of researchers.
Zhong Wang, Hongbo Sun and Baode Fan
Abstract
Purpose
The era of the crowd network is coming, and research on its steady state is of great importance. This paper aims to establish a crowd network simulation platform and to maintain the relative stability of multi-source dissemination systems.
Design/methodology/approach
With this simulation platform, this paper studies the characteristics of “emergence,” monitors the state of the system, judges its steady-state conditions according to the fixed point, and then uses three control conditions and control methods to regulate the system status, so as to acquire general rules for maintaining the stability of multi-source information dissemination systems.
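The fixed-point test for steady state can be sketched as iterating a dissemination update map until successive states stop changing. This is an illustrative one-dimensional toy, not the platform's actual multi-source model:

```python
def reaches_steady_state(step, x0, tol=1e-6, max_iter=1000):
    """Iterate x <- step(x); report whether a fixed point is reached within tolerance."""
    x = x0
    for _ in range(max_iter):
        nxt = step(x)
        if abs(nxt - x) < tol:
            return True, nxt   # steady state: step(x) is approximately x
        x = nxt
    return False, x            # no convergence within the iteration budget
```

A contracting update such as x ↦ 0.5x + 1 settles at its fixed point x = 2, while an expanding update such as x ↦ 2x + 1 never does; the latter is the kind of condition a controller would then need to suppress.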
Findings
This paper establishes a novel steady-state maintenance simulation framework. It will be useful for achieving controllability to the evolution of information dissemination and simulating the effectiveness of control conditions for multi-source information dissemination systems.
Originality/value
This paper will help researchers solve problems of public opinion control in multi-source information dissemination in crowd networks.
Shailesh Khapre, Prabhishek Singh, Achyut Shankar, Soumya Ranjan Nayak and Manoj Diwakar
Abstract
Purpose
This paper aims to use the concept of machine learning to enable people and machines to interact more certainly to extend and expand human expertise and cognition.
Design/methodology/approach
Intelligent code reuse recommendations based on code big data analysis, mining and learning can effectively improve the efficiency and quality of software reuse, including both common code units specific to a field and common code units that are field-independent.
Findings
Focusing on the topic of context-based intelligent code reuse recommendation, this paper expounds the research work in two aspects, aimed mainly at practical applications in smart decision support and cognitive adaptive systems: code reuse recommendation based on template mining and code reuse recommendation based on deep learning.
Originality/value
On this basis, the future development direction of context-based intelligent code reuse recommendation is outlined.
Fatima Zohra Ennaji, Abdelaziz El Fazziki, Hasna El Alaoui El Abdallaoui, Djamal Benslimane and Mohamed Sadgal
Abstract
Purpose
The purpose of this paper is to bring together textual and multimedia opinions, since the use of social data has become the new trend that enables gathering the product reputation traded in social media. Integrating a product reputation process into a company's strategy brings several benefits, such as supporting decision-making regarding the current and next generation of the product by understanding customers’ needs. However, image-centric sentiment analysis has received much less attention than text-based sentiment detection.
Design/methodology/approach
In this work, the authors propose a multimedia content-based product reputation framework that helps detect opinions from social media. The analysis of a given publication is made by combining its textual and multimedia parts.
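Combining the textual and multimedia parts of a publication can be sketched as a weighted fusion of per-modality polarity scores. This is purely illustrative; the modality names, weights and neutral band are assumptions, not the authors' framework:

```python
def fuse_opinions(scores, weights):
    """Weighted average of per-modality polarity scores in [-1, 1].

    scores  -- e.g. {"text": 0.6, "face": -0.2, "speech": 0.4}
    weights -- relative importance of each modality present in scores
    """
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

def label(polarity, neutral_band=0.1):
    """Map a fused polarity score to an opinion label."""
    if polarity > neutral_band:
        return "positive"
    if polarity < -neutral_band:
        return "negative"
    return "neutral"
```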
Findings
To test the effectiveness of the proposed framework, a case study based on YouTube videos has been established, as it brings together the image, the audio and the video processing at the same time.
Originality/value
The key novelty is the implication of multimedia content in addition of the textual one with the goal of gathering opinions about a certain product. The multimedia analysis brings together facial sentiment detection, printed text analysis, opinion detection from speeches and textual opinion analysis.
Ying Tao Chai and Ting-Kwei Wang
Abstract
Purpose
Defects in concrete surfaces inevitably recur during construction and need to be checked and accepted during construction and at completion. Traditional manual inspection of surface defects requires inspectors to judge, evaluate and make decisions, which demands sufficient experience, is time-consuming and labor-intensive, and the expertise cannot be effectively preserved and transferred. In addition, the evaluation standards of different inspectors are not identical, which may lead to discrepancies in inspection results. Although computer vision can achieve defect recognition, there is a gap between the low-level semantics acquired by computer vision and the high-level semantics that humans understand from images. Therefore, computer vision and ontology are combined to achieve intelligent evaluation and decision-making and to bridge this gap.
Design/methodology/approach
Combining ontology and computer vision, this paper establishes an evaluation and decision-making framework for concrete surface quality. By establishing concrete surface quality ontology model and defect identification quantification model, ontology reasoning technology is used to realize concrete surface quality evaluation and decision-making.
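The reasoning step, mapping a quantified defect to a severity class and a repair method, can be illustrated with a small rule table. The thresholds and repair methods below are invented for illustration and are not taken from the paper's ontology:

```python
def evaluate_crack(width_mm, length_mm):
    """Toy severity rules for a quantified concrete surface crack."""
    if width_mm >= 0.3 or length_mm >= 1000:
        return "severe", "structural assessment and repair"
    if width_mm >= 0.1:
        return "moderate", "seal and monitor"
    return "minor", "cosmetic patching"
```

In the proposed framework this role is played by ontology reasoning over structured expert knowledge, so the rules remain inspectable and reusable rather than buried in code.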
Findings
Computer vision can identify and quantify defects, obtain low-level image semantics, and ontology can structurally express expert knowledge in the field of defects. This proposed framework can automatically identify and quantify defects, and infer the causes, responsibility, severity and repair methods of defects. Through case analysis of various scenarios, the proposed evaluation and decision-making framework is feasible.
Originality/value
This paper establishes an evaluation and decision-making framework for concrete surface quality, so as to improve the standardization and intelligence of surface defect inspection and potentially provide reusable knowledge for inspecting concrete surface quality. The research results in this paper can be used to detect the concrete surface quality, reduce the subjectivity of evaluation and improve the inspection efficiency. In addition, the proposed framework enriches the application scenarios of ontology and computer vision, and to a certain extent bridges the gap between the image features extracted by computer vision and the information that people obtain from images.
Xutang Zhang, Gaoliang Peng, Xin Hou and Ting Zhuang
Abstract
Purpose
Fixture design is a complicated task requiring both intensive knowledge and experience. This paper aims to present a computer-aided fixture design (CAFD) system framework based on design reuse technology.
Design/methodology/approach
A fixture design domain ontology is constructed by analyzing a corpus of fixture design documents. A design reuse engine is proposed to retrieve fixture design knowledge and fixture models based on the ontology, find fixture design cases similar to the design problem at hand, and then use evolutionary methods to modify the retrieved model so that it meets the design requirements and generates a new fixture.
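The case-retrieval step can be sketched as a weighted match over categorical design features. The feature names and weights here are hypothetical, not the similarity measure used in the paper:

```python
def similarity(query, case, weights):
    """Fraction of total feature weight on which the stored case agrees with the query."""
    total = sum(weights.values())
    matched = sum(w for f, w in weights.items() if query.get(f) == case.get(f))
    return matched / total

def retrieve(query, cases, weights):
    """Return the stored fixture design case most similar to the design problem."""
    return max(cases, key=lambda c: similarity(query, c, weights))
```

The retrieved case then serves as the starting point that the evolutionary modification step adapts to the new design requirements.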
Findings
The paper finds that the proposed framework is an efficient tool to improve efficiency of fixture design.
Practical implications
Existing fixture design experience and cases can be efficiently reused to advance new fixture design processes.
Originality/value
This paper presents a CAFD system framework capable of carrying out fixture design through full use of existing fixture design resources and expert knowledge.