Search results
1 – 10 of 345
Abigail A. Allen and Kristina N. Randall
Abstract
Purpose
Empirical validation of educational technology is critical for best practice, particularly when courses are delivered online. This study aims to investigate the predictive relationship between usage behaviors and the perceptions of 30 preservice special education teachers reading in an online social annotation reading tool.
Design/methodology/approach
In this single-group quasi-experimental study, participants completed two readings in Perusall, once individually and once in small groups, then took a researcher-created survey after each reading. Descriptive data and paired sample t-tests were calculated. Predictive relationships between usage behaviors and survey results were analyzed with linear regression models.
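The paired-sample t-tests and linear regressions described here can be sketched in Python with the standard library alone. All survey scores and usage minutes below are invented for illustration; they are not the study's data.

```python
import math
import statistics

def paired_t(before, after):
    """Paired-sample t statistic (df = n - 1) for two related score lists."""
    diffs = [b - a for a, b in zip(before, after)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of differences
    return mean_d / (sd_d / math.sqrt(len(diffs)))

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Invented 1-5 Likert scores after the individual and small-group
# readings, and invented minutes of active reading.
individual = [3.0, 3.5, 4.0, 2.5, 3.5, 4.0]
small_group = [3.5, 4.0, 4.5, 3.0, 4.5, 4.0]
usage_minutes = [10, 25, 40, 15, 30, 50]

t = paired_t(individual, small_group)          # group vs individual reading
slope = ols_slope(usage_minutes, small_group)  # perception regressed on usage
print(round(t, 2), round(slope, 4))
```

A small |slope| with a nonsignificant fit would mirror the study's finding that usage behaviors did not significantly account for beliefs about the tool.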
Findings
Participants thought Perusall was useful for their learning and easier to use in small groups and that guided reading prompts were helpful. Usage behaviors did not significantly account for participant beliefs about Perusall. Instructors may wish to use guided reading prompts and small groups to maximize student learning and engagement.
Originality/value
This study addresses gaps in the literature (Suhre et al., 2019; Sun et al., 2023) by following one group of students over two semesters, using a commercially available tool, measuring actual usage behaviors rather than solely student perceptions, and analyzing instructor perceptions of the tool. The authors contribute further evidence that group-constructed knowledge is valuable for undergraduate learning (Kalir et al., 2020b). The authors also provide data-based suggestions for the use of social annotation tools that maximize student learning and engagement.
Weimin Zhai, Zhongzhen Lin and Biwen Xu
Abstract
Purpose
With the rapid development of technology, the 360° panorama on mobile devices, as a very convenient way to present virtual reality, has brought a new shopping experience to consumers. Usually, consumers get product information through virtual annotations in the 360° panorama and then engage in a series of shopping behaviors. The visual design of virtual annotations significantly influences users' online visual search for product information. This study aims to investigate the influence of the visual design of virtual annotations on consumers' shopping experience in the 360° panorama online shopping interface.
Design/methodology/approach
A 2 × 3 between-subjects design was used to explore whether different annotation display modes (i.e. negative polarity and positive polarity) and different annotation background transparencies (i.e. 0%, 25% and 50% transparency) affect users' task performance and their subjective evaluations.
Findings
(1) Virtual annotations with different background transparency affect user performance; the 25% transparency condition yielded better visual search performance. (2) The annotation background display mode may affect user operation performance; positive polarity virtual annotations make users' visual search for product information more convenient. (3) When the annotation background is opaque or semi-transparent, the negative polarity display is more favorable to users' visual search; however, this is reversed when the background transparency is 25%. (4) Participants preferred the presentation of positive polarity virtual annotations. (5) Regarding willingness to use and ease of understanding, participants preferred the negative polarity display at 0% or 50% background transparency; however, the opposite result was obtained at 25% background transparency.
Originality/value
The findings generated from the research can be a good reference for the development of virtual annotation visual design for mobile shopping applications.
Highlights
Virtual annotation background transparency and background display mode are two essential attributes of 360° panoramas.
This study examined how virtual annotation background transparency and background display mode influence user performance and experience.
It is recommended to use a translucent or opaque annotation background with a negative polarity display.
Virtual annotation presentation with 25% background transparency facilitates consumer searching and comparison of product information.
Users prefer a positive polarity annotation display.
Giovanna Aracri, Antonietta Folino and Stefano Silvestri
Abstract
Purpose
The purpose of this paper is to propose a methodology for the enrichment and tailoring of a knowledge organization system (KOS), in order to support the information extraction (IE) task for the analysis of documents in the tourism domain. In particular, the KOS is used to develop a named entity recognition (NER) system.
Design/methodology/approach
A method to improve and customize an available thesaurus by leveraging documents related to tourism in Italy is first presented. Then, the obtained thesaurus is used to create an annotated NER corpus, exploiting distant supervision, deep learning and light human supervision.
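The distant-supervision step can be illustrated with a minimal sketch: thesaurus terms are matched greedily against tokenized text to produce BIO labels for a NER corpus. The thesaurus entries and entity labels below are invented examples, not the paper's actual thesaurus.

```python
# Hypothetical tourism thesaurus mapping lowercase phrases to entity labels.
THESAURUS = {
    "colosseum": "ATTRACTION",
    "trevi fountain": "ATTRACTION",
    "rome": "PLACE",
}

def bio_annotate(tokens):
    """Greedy longest-match of thesaurus terms over tokens; returns BIO tags."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # Try the longest candidate span starting at position i first.
        for j in range(len(tokens), i, -1):
            phrase = " ".join(tokens[i:j]).lower()
            if phrase in THESAURUS:
                label = THESAURUS[phrase]
                tags[i] = "B-" + label
                for k in range(i + 1, j):
                    tags[k] = "I-" + label
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return tags

tokens = "Visit the Trevi Fountain in Rome".split()
print(list(zip(tokens, bio_annotate(tokens))))
```

Labels produced this way are noisy, which is why the paper combines them with deep learning and light human supervision rather than using them directly as gold data.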
Findings
The study shows that a customized KOS can effectively support IE tasks when applied to documents belonging to the same domains and types used for its construction. Moreover, the proposed methodology is very useful for supporting and easing the annotation task, allowing a corpus to be annotated with a fraction of the effort required for manual annotation.
Originality/value
The paper explores an alternative use of a KOS, proposing an innovative NER corpus annotation methodology. Moreover, the KOS and the annotated NER data set will be made publicly available.
Lino Gonzalez-Garcia, Gema González-Carreño, Ana María Rivas Machota and Juan Padilla Fernández-Vega
Abstract
Purpose
Knowledge graphs (KGs) are structured knowledge bases that represent real-world entities and are used in a variety of applications. Many of them are created and curated from a combination of automated and manual processes. Microdata embedded in Web pages for purposes of facilitating indexing and search engine optimization are a potential source to augment KGs, under some assumptions of complementarity and quality that have not been thoroughly explored to date. In that direction, this paper aims to report results of a study that evaluates the potential of using microdata extracted from the Web to augment the large, open and manually curated Wikidata KG for the domain of touristic information. As large corpora of Web text are currently being leveraged via large language models (LLMs), these are also used to compare the effectiveness of the microdata enhancement method.
Design/methodology/approach
The Schema.org taxonomy was used as the source to determine the annotation types to be collected. Here, the authors focused on tourism-related pages as a case study, selecting the relevant Schema.org concepts as a point of departure. The large CommonCrawl resource was used to select those annotations from a large recent sample of the World Wide Web. The extracted annotations were processed and matched with Wikidata to estimate the degree to which microdata produced for SEO might become a valuable resource to complement KGs, or vice versa. The Web pages themselves can also serve as context for producing additional metadata elements, feeding them into pipelines built on existing LLMs. That way, both the annotations and the content itself can be used as sources.
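The microdata-harvesting step can be sketched with Python's standard-library HTML parser. The page snippet, item type and property names below are made-up examples; real Schema.org markup often nests an address inside a PostalAddress item, which this flat sketch ignores.

```python
from html.parser import HTMLParser

class MicrodataParser(HTMLParser):
    """Collects itemtype and flat itemprop/text pairs from a single item."""

    def __init__(self):
        super().__init__()
        self.itemtype = None
        self.props = {}
        self._current_prop = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemtype" in attrs:
            self.itemtype = attrs["itemtype"]
        if "itemprop" in attrs:
            self._current_prop = attrs["itemprop"]

    def handle_data(self, data):
        # Attach the next non-empty text node to the pending itemprop.
        if self._current_prop and data.strip():
            self.props[self._current_prop] = data.strip()
            self._current_prop = None

page = """
<div itemscope itemtype="https://schema.org/TouristAttraction">
  <span itemprop="name">Trevi Fountain</span>
  <span itemprop="addressLocality">Rome</span>
</div>
"""
parser = MicrodataParser()
parser.feed(page)
print(parser.itemtype, parser.props)
```

Records extracted this way could then be matched against Wikidata entities by name and type, which is the kind of comparison the study performs at scale.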
Findings
The samples extracted revealed a concentration of metadata annotations in only a few of the relevant Schema.org attributes, and also revealed the possible influence of authoring tools in a significant fraction of the microdata produced. The analysis of the overlap of attributes in the sample with those of Wikidata showed the potential of the technique, limited by the imbalance in attribute coverage. Combining those with the use of LLMs to produce additional annotations demonstrates the feasibility of the approach for populating existing Wikidata locations. However, in both cases, the effectiveness appears to be lower for entries with less content in the KG, which are arguably the most relevant when considering an automated population scenario.
Originality/value
The research reports novel empirical findings on the way touristic annotations with an SEO orientation are being produced in the wild and provides an assessment of their potential to complement KGs, or to reuse information from those graphs. It also provides insights on the potential of using LLMs for the task.
Florian Rupp, Benjamin Schnabel and Kai Eckert
Abstract
Purpose
The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.
Design/methodology/approach
In this extended paper, the authors build onto three modeling use cases published in a previous paper: (1) provide provenance information, (2) maintain backwards compatibility for existing models, and (3) reduce the complexity of a data model. The authors present two scenarios where they implement the use of the meta-level to extend a data model with meta-information.
Findings
The authors present three abstract patterns for actively using the meta-level in data modeling. The authors showcase the implementation of the meta-level through two scenarios from their research project: (1) the authors introduce a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, such as for reporting errors or adding supplementary information; (2) the authors demonstrate how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
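The triple-annotation workflow in scenario (1) can be illustrated with a small RDF-star (Turtle) fragment, where a quoted triple is annotated directly without reification. All IRIs here are hypothetical, not taken from the authors' data model.

```turtle
@prefix ex: <http://example.org/> .

# Base statement in the data model.
ex:painting1 ex:creator ex:artistA .

# RDF-star: the quoted triple itself becomes the subject of
# meta-level statements, e.g. a user comment reporting an error.
<< ex:painting1 ex:creator ex:artistA >>
    ex:comment "Attribution disputed." ;
    ex:annotatedBy ex:user7 .
```

Compared with classic RDF reification or Named Graphs, this keeps the annotation attached to exactly one statement without introducing auxiliary resources.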
Practical implications
Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.
Originality/value
With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.
Abstract
Purpose
This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.
Design/methodology/approach
A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.
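The core mapping step can be illustrated with a minimal sketch of normalized exact matching between an emerging vocabulary and an established one. Both term lists and the normalization rules are invented, and the paper's actual workflow is implemented in KNIME rather than Python.

```python
import unicodedata

def normalize(term):
    """Case-fold, strip accents and hyphens, collapse whitespace."""
    term = unicodedata.normalize("NFKD", term)
    term = "".join(c for c in term if not unicodedata.combining(c))
    return " ".join(term.lower().replace("-", " ").split())

# Hypothetical emerging COVID-19 terms and a hypothetical target vocabulary.
covid_vocab = ["COVID-19", "SARS-CoV-2 infection", "long covid"]
target_vocab = {
    normalize(t): t
    for t in ["Covid 19", "Influenza", "Long COVID", "Pneumonia"]
}

# Map each source term to its target, or None when no normalized match exists.
mappings = {src: target_vocab.get(normalize(src)) for src in covid_vocab}
print(mappings)
```

Unmapped terms (the None entries) are the cases that would fall through to annotation against ontologies or to human review in a fuller workflow.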
Findings
The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.
Originality/value
The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.
Shaodan Sun, Jun Deng and Xugong Qin
Abstract
Purpose
This paper aims to amplify the retrieval and utilization of historical newspapers through semantic organization from a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while providing methodological guidance for research in the humanities.
Design/methodology/approach
According to the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study delves into entity recognition and relationship extraction. These techniques are instrumental in identifying entities within the historical newspaper contents and capturing the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.
Findings
This article utilized the Shengjing Times·Changchun Compilation as the dataset for describing, extracting, associating and applying newspaper contents. Regarding knowledge element extraction, the BERT + BS model consistently outperforms Bi-LSTM, CRF++ and even BERT in terms of recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which stands out with the highest scores across all metrics, notably achieving an exceptional F1 score in knowledge element relationship recognition.
Originality/value
Historical newspapers transcend their status as mere artifacts, evolving into invaluable reservoirs safeguarding societal and historical memory. Through semantic organization from a fine-grained knowledge element perspective, this approach can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.
Mrinalini Luthra, Konstantin Todorov, Charles Jeurgens and Giovanni Colavizza
Abstract
Purpose
This paper aims to expand the scope and mitigate the biases of extant archival indexes.
Design/methodology/approach
The authors use automatic entity recognition on the archives of the Dutch East India Company to extract mentions of underrepresented people.
Findings
The authors release an annotated corpus and baselines for a shared task and show that the proposed goal is feasible.
Originality/value
Colonial archives are increasingly a focus of attention for historians and the public; broadening access to them is a pressing need for archives.
Chunxiu Qin, Yulong Wang, XuBu Ma, Yaxi Liu and Jin Zhang
Abstract
Purpose
To address the shortcomings of existing academic user information needs identification methods, such as low efficiency and high subjectivity, this study aims to propose an automated method of identifying online academic user information needs.
Design/methodology/approach
This study’s method consists of two main parts: the first is the automatic classification of academic user information needs based on the bidirectional encoder representations from transformers (BERT) model. The second is the key content extraction of academic user information needs based on the improved MDERank key phrase extraction (KPE) algorithm. Finally, the applicability and effectiveness of the method are verified by an example of identifying the information needs of academic users in the field of materials science.
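The idea behind MDERank-style key phrase extraction (ranking candidates by how much the document representation changes when a candidate is masked out) can be sketched as follows. A bag-of-words vector stands in for the BERT embedding the actual algorithm uses, and the document and candidate phrases are invented examples.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a document embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def mderank(document, candidates):
    """Rank candidates: the more masking a phrase changes the document
    representation (lower similarity), the more important the phrase."""
    doc_vec = embed(document)
    scores = {}
    for phrase in candidates:
        masked = document.replace(phrase, "")
        scores[phrase] = 1.0 - cosine(doc_vec, embed(masked))
    return sorted(scores, key=scores.get, reverse=True)

doc = ("graphene electrodes improve battery capacity ; "
       "graphene synthesis remains costly")
print(mderank(doc, ["graphene", "battery capacity", "costly"]))
```

With real BERT embeddings the masking step operates on contextual representations rather than word counts, but the ranking principle is the same.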
Findings
Experimental results show that the BERT-based information needs classification model achieved the highest weighted average F1 score of 91.61%. The improved MDERank KPE algorithm achieves the highest F1 score of 61%. The empirical analysis results reveal that the information needs of the categories “methods,” “experimental phenomena” and “experimental materials” are relatively high in the materials science field.
Originality/value
This study provides a solution for automated identification of academic user information needs. It helps online academic resource platforms to better understand their users’ information needs, which in turn facilitates the platform’s academic resource organization and services.
Yohanes Sigit Purnomo W.P., Yogan Jaya Kumar and Nur Zareen Zulkarnain
Abstract
Purpose
To date, the corpus for the quotation extraction and quotation attribution tasks in Indonesian is still limited in quantity and depth. This study aims to develop an Indonesian corpus of public figure statement attributions and a baseline model for attribution extraction, so as to contribute to fostering research in information extraction for the Indonesian language.
Design/methodology/approach
The methodology is divided into corpus development and extraction model development. During corpus development, data were collected and annotated. The development of the extraction model entails feature extraction, the definition of the model architecture, parameter selection and configuration, model training and evaluation, as well as model selection.
Findings
The Indonesian corpus of public figure statement attributions achieved a 90.06% agreement level between the annotator and experts and could serve as a gold standard corpus. Furthermore, the baseline model predicted most labels correctly and achieved an F-score of 82.026%.
Originality/value
To the best of the authors’ knowledge, the resulting corpus is the first corpus for attribution of public figures’ statements in the Indonesian language, which makes it a significant step for research on attribution extraction in the language. The resulting corpus and the baseline model can be used as a benchmark for further research. Other researchers could follow the methods presented in this paper to develop a new corpus and baseline model for other languages.