Search results

1 – 10 of over 2000
Article
Publication date: 8 February 2021

Emily Hellmich, Jill Castek, Blaine E. Smith, Rachel Floyd and Wen Wen

Abstract

Purpose

Multimodal composing is often romanticized as a flexible approach suitable for all learners. There is a lack of research that critically examines students’ perspectives and the constraints of multimodal composing across academic contexts. This study aims to address this need by exploring high school learners’ perspectives and experiences enacting multimodal learning in an L2 classroom. More specifically, this study presents key tensions between students’ experiences of multimodal composing and teacher/researchers’ use of multimodal composition in an L2 classroom setting.

Design/methodology/approach

The paper focuses on two multimodal composing projects developed within a design-based implementation research approach and implemented in a high school French class. Multiple data sources were used: observations; interviews; written reflections; and multimodal compositions. Data were analyzed using the critical incident technique (CIT). A critical incident is one that is unplanned and that stimulates reflection on teaching and learning. Methodologically, CIT was enacted through iterative coding to identify critical incidents and collaborative analysis.

Findings

Using illustrative examples from multiple data sources, this study discusses four tensions between students’ experiences of multimodal composing and teacher/researchers’ use of multimodal composition in a classroom setting: the primary audience of student projects, the media leveraged in student projects, expectations of learning in school and the role of a public viewing of student work.

Originality/value

This paper problematizes basic assumptions and benefits of multimodal composing and offers ideas on how to re-center multimodal composing on student voices.

Details

English Teaching: Practice & Critique, vol. 20 no. 2
Type: Research Article
ISSN: 1175-8708

Article
Publication date: 29 December 2023

Mousin Omarsaib

Abstract

Purpose

This study aims to explore first-year engineering students’ perceptions of the engineering librarian as an instructor in multimodal environments related to Information Literacy (IL) topics, teaching strategy, content evaluation, organising, planning and support.

Design/methodology/approach

A quantitative approach was used, with a survey instrument based on an online questionnaire. Questions were adapted from a lecturer evaluation survey. A simple random sampling technique was used to collect data from first-year cohorts of engineering students in 2020 and 2022.
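For illustration only (cohort sizes, identifiers and sample sizes are invented, since the abstract does not state them), simple random sampling of this kind can be sketched with Python's standard library:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical rosters; real cohort and sample sizes are not given.
cohort_2020 = [f"eng2020_{i:03d}" for i in range(1, 181)]
cohort_2022 = [f"eng2022_{i:03d}" for i in range(1, 181)]

# Simple random sampling: each student has an equal chance of selection,
# drawn without replacement from the cohort.
sample_2020 = random.sample(cohort_2020, 40)
sample_2022 = random.sample(cohort_2022, 40)
assert len(set(sample_2020)) == 40  # distinct respondents, no duplicates
```

Drawing without replacement from the full roster is what distinguishes simple random sampling from convenience sampling of whoever responds first.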

Findings

Respondents’ perception of the engineering librarian as an instructor in a multimodal learning environment was positive. Findings revealed that students’ learning experiences remained aligned with IL instruction even though the environment changed from blended to online. However, an emerging theme that appeared repeatedly was a lack of access to technology.

Practical implications

These findings may help in developing and strengthening the teaching identity of academic librarians as instructors in multimodal learning environments.

Originality/value

To the best of the author’s knowledge, this study is novel in that it evaluates the teaching abilities of an academic librarian in multimodal environments through the lens of students.

Details

Digital Library Perspectives, vol. 40 no. 1
Type: Research Article
ISSN: 2059-5816

Article
Publication date: 30 October 2018

Katina Zammit

Abstract

Purpose

This study seeks to demonstrate how explicit teaching of SFL metalinguistic and multimodal “grammars” enhanced 8-9-year-old children’s deeper understanding and production of multimodal texts through critique of how mini-documentaries about animals are constructed: the information, the language of narration, the composition of scenes and the resources used to engage the viewer. It also seeks to demonstrate how knowledge of metalinguistic and multimodal “grammars” contributes to students achieving both content knowledge and an understanding of the resources of semiotic modes.

Design/methodology/approach

A design-based approach was used with the teacher and author working closely together to implement a unit of work on mini-documentaries, including explicit teaching of the metalanguage of information reports, mini-documentary narration (aka script) and multimodal resources deployed to scaffold students’ creating their own mini-documentaries.

Findings

The students’ mini-documentaries demonstrate how knowledge of written and multimodal SFL-informed “grammars” helped students learn how meaning is created through the selection of resources from the written, visual, sound and gestural modes, and how to apply this knowledge to creating multimodal texts that demonstrate both their understanding of the topic and their ability to make meaning in a multimodal mini-documentary.

Research limitations/implications

The research is limited to the outcomes from one group of students in one class. Generalisation to other contexts is not possible. Further studies are required to support the results from this research.

Practical implications

The linguistic and multimodal SFL-informed grammars can be applied by educators to critique multimodal texts in a range of mediums and scaffold students’ production of multimodal texts. They can also inform assessment criteria and expand students’ conception of what is literate practice.

Originality/value

Knowledge of a linguistic and multimodal metalanguage can provide students with the tools to enhance their critical language awareness and critical multimodal awareness.

Details

English Teaching: Practice & Critique, vol. 17 no. 4
Type: Research Article
ISSN: 1175-8708

Article
Publication date: 15 June 2021

Runyu Chen

Abstract

Purpose

Micro-video platforms have gained attention in recent years and have become an important new channel for merchants to advertise their products. Since little research has studied micro-video advertising, this paper aims to fill that gap by exploring the determinants of micro-video advertising clicks. We build a micro-video advertising click prediction model and demonstrate the effectiveness, in the prediction task, of multimodal information extracted from the advertisement producers, the commodities being sold and the micro-video contents.

Design/methodology/approach

A multimodal analysis framework was conducted based on real-world micro-video advertisement datasets. To better capture the relations between different modalities, we adopt a cooperative learning model to predict the advertising clicks.
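The abstract does not specify the model’s internals, so the following is only a minimal sketch of the idea of combining multimodal features for click prediction (feature values, dimensions and function names are all invented), using early fusion by concatenation and a logistic scorer rather than the paper’s cooperative learning model:

```python
import math
import random

def fuse(visual, acoustic, textual, numerical):
    """Early fusion: concatenate per-modality feature vectors into one
    joint representation of the advertising micro-video."""
    return visual + acoustic + textual + numerical

def predict_click(features, weights, bias=0.0):
    """Logistic scorer: estimated probability that the ad is clicked."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy features for one micro-video advertisement.
fused = fuse([0.2, 0.7], [0.1], [0.4, 0.0, 0.9], [1.0])
random.seed(0)
weights = [random.uniform(-1, 1) for _ in fused]
probability = predict_click(fused, weights)
assert 0.0 < probability < 1.0
```

A cooperative learning model, as the paper describes it, would go further by modelling relations between the modalities rather than treating the concatenation as a flat vector.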

Findings

The experimental results show that the features extracted from the different data sources can improve prediction performance. Furthermore, the combination of features from different modalities (visual, acoustic, textual and numerical) is also worth studying. The proposed cooperative learning model achieves significantly better prediction results than classical baseline models, which demonstrates that the relations between modalities are also important in advertising micro-video generation.

Originality/value

To the best of our knowledge, this is the first study analysing micro-video advertising effects. With the help of our advertising click prediction model, advertisement producers (merchants or their partners) can benefit from generating more effective micro-video advertisements. Furthermore, micro-video platforms can apply our prediction results to optimise their advertisement allocation algorithm and better manage network traffic. This research can be of great help for more effective development of the micro-video advertisement industry.

Details

Internet Research, vol. 32 no. 2
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 3 February 2023

Lizhao Zhang, Jui-Long Hung, Xu Du, Hao Li and Zhuang Hu

Abstract

Purpose

Student engagement is a key factor that connects with student achievement and retention. This paper aims to identify individuals' engagement automatically in the classroom with multimodal data for supporting educational research.

Design/methodology/approach

The video and electroencephalogram data of 36 undergraduates were collected to represent observable and internal information. Since different modal data have different granularity, this study proposed the Fast–Slow Neural Network (FSNN) to detect engagement through both observable and internal information, with an asynchrony structure to preserve the sequence information of data with different granularity.
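As a loose illustration of handling modalities with different granularity (this is not the actual FSNN; the window size and signal values are invented), a fast, fine-grained stream can be summarized per step of a slow, coarse stream before pairing the two, so that sequence order is preserved across granularities:

```python
def window_average(samples, window):
    """Downsample a fine-grained stream by averaging fixed, non-overlapping
    windows, yielding one summary value per window."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

def fuse_fast_slow(fast_stream, slow_stream, window):
    """Pair each step of the coarse (slow) stream with a summary of the
    matching fine-grained (fast) samples, preserving sequence order."""
    return list(zip(window_average(fast_stream, window), slow_stream))

eeg = [0.1, 0.3, 0.2, 0.4, 0.5, 0.1]   # fine-grained "fast" signal (invented)
video = [0.7, 0.9]                     # coarse "slow" per-interval feature
pairs = fuse_fast_slow(eeg, video, window=3)
assert len(pairs) == len(video)  # one fused step per slow-stream step
```

The FSNN’s asynchronous structure presumably learns this alignment rather than hard-coding a fixed averaging window, but the granularity mismatch it addresses is the one shown here.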

Findings

Experimental results show that the proposed algorithm recognizes engagement better than traditional data fusion methods. The results are also analyzed to identify the reasons for the FSNN’s better performance.

Originality/value

This study combined multimodal data from observable and internal aspects to improve the accuracy of engagement detection in the classroom. The proposed FSNN used the asynchronous process to deal with the problem of remaining sequential information when facing multimodal data with different granularity.

Details

Data Technologies and Applications, vol. 57 no. 3
Type: Research Article
ISSN: 2514-9288

Book part
Publication date: 9 May 2017

Rachel Heydon, Zheng Zhang and Beatrix Bocazar

Abstract

Illustrated through ethnographic data drawn from a case study of a full-day kindergarten in Ontario, Canada, this chapter argues for an approach to inclusive curriculum that places the ethical relation at the center and promotes children’s rights through opportunities for multimodal communication. Theoretically, this case drew on multimodal literacy and ethical curricula. The study used ethnographic tools such as class observations, semi-structured interviews, and collection of children’s work. Findings indicate that responsive, ethical curricula through multimodal pedagogies were intrinsically inclusive of all children’s funds of knowledge and encouraged children to become curricular informants and take control of their choices of meaning making.

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems, with the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality is the use of many modes, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for any one person to command all of the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military-robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. Possession of a flexible means of communication that readily adapts to virtual training will enhance planning and mission rehearsals tremendously.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. The ability to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, would certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multimodalities. There are several separate lines of ongoing progress, such as machine-enabled speech, image recognition, tracking, visualizations for situational awareness and virtual environments. At this time, there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

Article
Publication date: 13 May 2019

Maggie Struck and Stephanie Rollag Yoon

Abstract

Purpose

The purpose of this paper is to explore how a preservice teacher’s beliefs change over time in a literacy methods elementary licensure course that encourages critical literacy and connected learning. The authors were interested in the interplay among identity, agency and structure within this process, and how this connects with other literature on teacher beliefs and technology use.

Design/methodology/approach

Utilizing data from a larger ethnographic study and mediated discourse analysis (Scollon and Scollon, 2004), this paper follows a preservice teacher’s use of digital tools, and her beliefs about using digital tools in the classroom, over a semester-long hybrid course.

Findings

Findings show changes in the preservice teacher’s beliefs about technology use, interest-driven learning and her own agency. These changes were influenced by the framework of the course and its practices.

Research limitations/implications

This research study offers practical ways to support preservice teachers’ implementation of digital tools with an emphasis on equity. Ultimately, preservice teachers’ experience shapes the opportunities students have with digital tools in schools.

Practical implications

Recognizing the competing discourses and pressures that preservice teachers experience, the results of this study offer tools to support preservice teachers’ agency through the implementation of connected learning principles and critical literacy theories in preservice education courses, with the potential to expand equity in school settings.

Originality/value

While there is research around connected learning in classrooms, there is limited research on a connected learning framework in preservice education programs. Additionally, this paper brings a new perspective on how pairing an emphasis of equity to a connected learning framework supports teachers’ implementation of digital tools.

Details

The International Journal of Information and Learning Technology, vol. 36 no. 5
Type: Research Article
ISSN: 2056-4880

Book part
Publication date: 17 September 2018

Bridget Dalton and Kirsten Musetti

Abstract

Purpose – The purpose is to expand multimodal composition frameworks and practices to include tactile design and use of maker technologies, situated in a larger context of designing for equity and increasing access to picture books for children with visual impairments.

Design – As part of the Build a Better Book project, we designed workshops to engage students in composing tactile books enhanced with sound and Braille for young children with visual impairments. Education undergraduates in a children’s literature class crafted tactile retellings over a 2-session workshop, and high school students in an ELA class designed and fabricated 3D printed tactile books over several weeks.

Findings – Both pre-service candidates and high school students developed awareness of the importance of inclusive, equity-oriented design of picture books, and especially for children with visual impairments. They collaborated in teams, developing design skills manipulating texture, shape, size and spatial arrangement to express their tactile retellings and enhanced meaning with sound. The high school students had more opportunity to build technical and computational thinking through their use of Makey Makey, Scratch, and TinkerCad.

Practical implications – Multimodal composition and making can be effectively integrated into pre-service candidates’ literacy education, as well as high school English Language Arts, to develop multimodal communication and inclusive design skills and values. Success depends on interdisciplinary expertise (e.g., children’s books, tactile design, making technologies) and sufficient access to physical and digital materials and tools.

Details

Best Practices in Teaching Digital Literacies
Type: Book
ISBN: 978-1-78754-434-5

Article
Publication date: 1 November 2023

Juan Yang, Zhenkun Li and Xu Du

Abstract

Purpose

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms through which human beings express their emotional states in daily communication. Achieving automatic and accurate audiovisual emotion recognition is therefore important for developing engaging and empathetic human–computer interaction environments. However, two major challenges exist in the field of audiovisual emotion recognition: (1) how to effectively capture representations of each single modality and eliminate redundant features and (2) how to efficiently integrate information from the two modalities to generate discriminative representations.

Design/methodology/approach

A novel key-frame extraction-based attention fusion network (KE-AFN) is proposed for audiovisual emotion recognition. KE-AFN attempts to integrate key-frame extraction with multimodal interaction and fusion to enhance audiovisual representations and reduce redundant computation, filling the research gaps of existing approaches. Specifically, the local maximum–based content analysis is designed to extract key-frames from videos for the purpose of eliminating data redundancy. Two modules, including “Multi-head Attention-based Intra-modality Interaction Module” and “Multi-head Attention-based Cross-modality Interaction Module”, are proposed to mine and capture intra- and cross-modality interactions for further reducing data redundancy and producing more powerful multimodal representations.
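As a rough, simplified sketch of local-maximum-based key-frame selection (the scoring function and values are invented, and the paper’s content analysis is certainly more elaborate), one might keep frames whose content-change score exceeds both neighbours:

```python
def keyframes_by_local_maxima(change_scores):
    """Keep frame indices whose content-change score is a strict local
    maximum relative to both neighbours."""
    keep = []
    for i in range(1, len(change_scores) - 1):
        if (change_scores[i] > change_scores[i - 1]
                and change_scores[i] > change_scores[i + 1]):
            keep.append(i)
    return keep

# Invented per-frame scores (e.g. histogram difference between frames).
scores = [0.1, 0.5, 0.2, 0.3, 0.8, 0.4, 0.4]
print(keyframes_by_local_maxima(scores))  # [1, 4]
```

Discarding the remaining frames is what reduces the data redundancy that the attention modules then operate on.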

Findings

Extensive experiments on two benchmark datasets (i.e. RAVDESS and CMU-MOSEI) demonstrate the effectiveness and rationality of KE-AFN. Specifically, (1) KE-AFN is superior to state-of-the-art baselines for audiovisual emotion recognition. (2) Exploring the supplementary and complementary information of different modalities can provide more emotional clues for better emotion recognition. (3) The proposed key-frame extraction strategy can enhance the performance by more than 2.79 per cent on accuracy. (4) Both exploring intra- and cross-modality interactions and employing attention-based audiovisual fusion can lead to better prediction performance.

Originality/value

The proposed KE-AFN can support the development of engaging and empathetic human–computer interaction environment.
