Search results

1 – 10 of over 12,000
Open Access
Article
Publication date: 18 November 2021

Shin'ichiro Ishikawa

Abstract

Purpose

Using a newly compiled corpus module consisting of utterances from Asian learners during L2 English interviews, this study examined how Asian EFL learners' L1s (Chinese, Indonesian, Japanese, Korean, Taiwanese and Thai), their L2 proficiency levels (A2, B1 low, B1 upper and B2+) and speech task types (picture descriptions, roleplays and QA-based conversations) affected four aspects of vocabulary usage (number of tokens, standardized type/token ratio, mean word length and mean sentence length).

Design/methodology/approach

These four aspects reflect speech fluency, lexical richness, lexical complexity and structural complexity, respectively.
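
As a purely illustrative sketch (not from the paper), the snippet below shows how these four measures might be computed from a plain-text transcript. The tokenizer, the sentence splitter and the 100-token chunk size used to approximate the standardized type/token ratio are assumptions; the corpus tools used in the study may define them differently.

```python
import re
from statistics import mean

def vocabulary_profile(text, sttr_chunk=100):
    """Illustrative versions of the four measures (assumed definitions)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[A-Za-z']+", text.lower())

    # 1. Speech fluency proxy: raw number of tokens produced.
    n_tokens = len(tokens)

    # 2. Lexical richness: standardized type/token ratio, i.e. mean TTR
    #    over consecutive chunks of `sttr_chunk` tokens.
    chunks = [tokens[i:i + sttr_chunk] for i in range(0, n_tokens, sttr_chunk)]
    full_chunks = [c for c in chunks if len(c) == sttr_chunk] or chunks
    sttr = mean(len(set(c)) / len(c) for c in full_chunks)

    # 3. Lexical complexity: mean word length in characters.
    mean_word_len = mean(len(t) for t in tokens)

    # 4. Structural complexity: mean sentence length in tokens.
    mean_sent_len = n_tokens / len(sentences)

    return {"tokens": n_tokens, "sttr": round(sttr, 3),
            "mean_word_length": round(mean_word_len, 2),
            "mean_sentence_length": round(mean_sent_len, 2)}

print(vocabulary_profile("I went to the market. It was very crowded today."))
```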

Findings

Subsequent corpus-based quantitative data analyses revealed that (1) learner/native speaker differences existed during the conversation and roleplay tasks in terms of the number of tokens, type/token ratio and sentence length; (2) an L1 group effect existed in all three task types in terms of the number of tokens and sentence length; (3) an L2 proficiency effect existed in all three task types in terms of the number of tokens, type/token ratio and sentence length; and (4) the usage of high-frequency vocabulary was more strongly influenced by the task type, and it could be classified into four types: Type A vocabulary for grammar control, Type B vocabulary for speech maintenance, Type C vocabulary for negotiation and persuasion and Type D vocabulary for novice learners.

Originality/value

These findings provide clues for better understanding L2 English vocabulary usage among Asian learners during speech.

Details

PSU Research Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2399-1747

Book part
Publication date: 13 July 2016

Catherine J. Taylor, Laura Freeman, Daniel Olguin Olguin and Taemie Kim

Abstract

Purpose

In this project, we propose and test a new device – wearable sociometric badges containing small microphones – as a low-cost and relatively unobtrusive tool for measuring stress response to group processes. Specifically, we investigate whether voice pitch, measured using the microphone of the sociometric badge, is associated with physiological stress response to group processes.

Methodology

We collect data in a laboratory setting using participants engaged in two types of small-group interactions: a social interaction and a problem-solving task. We examine the association between voice pitch (measured by fundamental frequency of the participant’s speech) and physiological stress response (measured using salivary cortisol) in these two types of small-group interactions.
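
A rough sketch of this kind of analysis, not the authors' pipeline: it assumes per-frame fundamental-frequency (F0) estimates are already available from a pitch tracker and relates each participant's pitch deviation to the pre/post change in salivary cortisol with a plain Pearson correlation. The participant data and field names below are invented placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

def pitch_deviation(f0_frames):
    """Mean absolute deviation of voiced frames from the speaker's overall
    average pitch (unvoiced frames, encoded as 0 Hz, are dropped)."""
    f0 = np.asarray(f0_frames, dtype=float)
    voiced = f0[f0 > 0]
    return float(np.mean(np.abs(voiced - voiced.mean())))

# Invented per-participant data: F0 in Hz per analysis frame and
# salivary cortisol sampled before and after the group task.
participants = [
    {"f0": [210, 0, 215, 230, 190, 220], "cortisol_pre": 0.12, "cortisol_post": 0.19},
    {"f0": [120, 118, 0, 125, 119, 121], "cortisol_pre": 0.15, "cortisol_post": 0.14},
    {"f0": [180, 205, 0, 160, 198, 175], "cortisol_pre": 0.10, "cortisol_post": 0.22},
]

deviations = [pitch_deviation(p["f0"]) for p in participants]
cortisol_change = [p["cortisol_post"] - p["cortisol_pre"] for p in participants]

r, p_value = pearsonr(deviations, cortisol_change)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```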

Findings

We find that in the social task, participants who exhibit a stress response have a statistically significantly greater deviation in voice pitch (from their overall average voice pitch) than those who do not exhibit a stress response. In the problem-solving task, participants who exhibit a stress response also have a greater deviation in voice pitch than those who do not; in this case, however, the results are only marginally significant. In both tasks, among participants who exhibited a stress response, we find a statistically significant correlation between physiological stress response and deviation in voice pitch.

Practical and research implications

We conclude that wearable microphones have the potential to serve as cheap and unobtrusive tools for measuring stress response to group processes.

Details

Advances in Group Processes
Type: Book
ISBN: 978-1-78635-041-1

Article
Publication date: 9 January 2009

J. Norberto Pires, Germano Veiga and Ricardo Araújo

Abstract

Purpose

The purpose of this paper is to report a collection of developments that enable users to program industrial robots using speech, several device interfaces, force control and code generation techniques.

Design/methodology/approach

The reported system is explained in detail and a few practical examples are given that demonstrate its usefulness for small to medium‐sized enterprises (SMEs), where robots and humans need to cooperate to achieve a common goal (coworker scenario). The paper also explores the user interface software adapted for use by non‐experts.

Findings

The programming‐by‐demonstration (PbD) system presented proved to be very efficient at programming entirely new features into an industrial robotic system. The system uses a speech interface for user commands, and a force‐controlled guiding system for teaching the robot the details of the task being programmed. With only a small set of implemented robot instructions, it was fairly easy to teach the robot system a new task, generate the robot code and execute it immediately.
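
The actual system is tied to a particular controller, speech SDK and force-control setup, none of which is reproduced here. The toy sketch below only illustrates the general programming-by-demonstration flow the abstract describes: spoken commands add taught positions and instructions to a buffer, and a final command generates an executable program. The command vocabulary and the generated output format are invented.

```python
from dataclasses import dataclass, field

@dataclass
class TaughtProgram:
    """Toy PbD buffer: the operator guides the robot to positions (passed in
    here as plain tuples) and adds instructions by voice command."""
    name: str
    lines: list = field(default_factory=list)
    _point_id: int = 0

    def handle_command(self, spoken, position=None):
        # Invented command vocabulary; a real system's speech grammar and
        # instruction set are controller-specific.
        if spoken == "add move" and position is not None:
            self._point_id += 1
            self.lines.append(f"MOVEL P{self._point_id}  ; {position}")
        elif spoken == "close gripper":
            self.lines.append("SET GRIPPER CLOSED")
        elif spoken == "open gripper":
            self.lines.append("SET GRIPPER OPEN")
        elif spoken == "generate":
            return "\n".join([f"PROGRAM {self.name}"] + self.lines + ["END"])
        return None

prog = TaughtProgram("pick_and_place")
prog.handle_command("add move", position=(420.0, -35.0, 180.0))
prog.handle_command("close gripper")
prog.handle_command("add move", position=(610.0, 220.0, 180.0))
prog.handle_command("open gripper")
print(prog.handle_command("generate"))
```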

Research limitations/implications

Although a particular robot controller was used, the system is in many aspects general, since the options adopted are mainly based on standards. It can obviously be implemented with other robot controllers without significant changes. In fact, most of the features were ported to run with Motoman robots with success.

Practical implications

It is important to stress that the robot program described here was obtained without writing a single line of code, but instead just by moving the robot to the desired positions and adding the required robot instructions using speech. Even the upload of the obtained module to the robot controller is commanded by speech, along with its execution/termination. Consequently, teaching the robotic system a new feature is accessible to any type of user with only minor training.

Originality/value

This type of PbD system will constitute a major advantage for SMEs, since most of those companies do not have the necessary engineering resources to make changes or add new functionalities to their robotic manufacturing systems. Even at the system integrator level, these systems are very useful for avoiding the need for specific knowledge about all the controllers with which they work: complexity is hidden behind the speech interfaces and portable interface devices, with specific and user‐friendly APIs making the connection between the programmer and the system.

Details

Industrial Robot: An International Journal, vol. 36 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 3 April 2017

Yuhki Shiraishi, Jianwei Zhang, Daisuke Wakatsuki, Katsumi Kumai and Atsuyuki Morishima

Abstract

Purpose

The purpose of this paper is to explore how to achieve crowdsourced real-time captioning of sign language by deaf and hard-of-hearing (DHH) people: how a system structure should be designed, how a continuous sign language captioning task should be divided into microtasks and how many DHH people are required to maintain high-quality real-time captioning.

Design/methodology/approach

The authors first propose a system structure, including the new design of worker roles, task division and task assignment. Then, based on an implemented prototype, the authors analyze the necessary setting for achieving a crowdsourced real-time captioning of sign language, test the feasibility of the proposed system and explore its robustness and improvability through four experiments.
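
As a simplified illustration only (not the paper's exact design), one plausible way to divide a continuous captioning stream into fixed-length microtasks and rotate them across worker groups could look like the following; the segment length, the number of groups and the round-robin assignment are assumptions.

```python
from itertools import cycle

def assign_microtasks(stream_duration_s, segment_s, groups):
    """Split a continuous captioning job into fixed-length microtasks and
    assign them to worker groups in rotation, so each group captions one
    segment while the others rest or review (illustrative scheme only)."""
    schedule, group_cycle, start = [], cycle(groups), 0.0
    while start < stream_duration_s:
        end = min(start + segment_s, stream_duration_s)
        schedule.append({"start_s": start, "end_s": end, "group": next(group_cycle)})
        start = end
    return schedule

groups = ["group_A", "group_B", "group_C"]  # assumed number of groups
for task in assign_microtasks(stream_duration_s=60, segment_s=10, groups=groups):
    print(task)
```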

Findings

The results of Experiment 1 have revealed the optimal method for task division, the necessary minimum number of groups and the necessary minimum number of workers in a group. The results of Experiment 2 have verified the feasibility of the crowdsourced real-time captioning of sign language by DHH people. The results of Experiment 3 and Experiment 4 have shown the robustness and improvability of the captioning system.

Originality/value

Although some crowdsourcing-based systems have been developed for the captioning of voice to text, the authors address the captioning of sign language to text, for which the existing approaches do not work well due to the unique properties of sign language. Moreover, DHH people are generally considered to be the ones who receive support from others, but this proposal helps them become the ones who offer support to others.

Details

International Journal of Pervasive Computing and Communications, vol. 13 no. 1
Type: Research Article
ISSN: 1742-7371

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: it combines many modes, each chosen for its rhetorical and communicative potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for one person to command all of the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot in military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible means of communication that readily adapts to virtual training will enhance planning and mission rehearsals tremendously.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and the possibilities of extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments. At this time, there is no integrated approach to multimodal human robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

Book part
Publication date: 13 June 2013

Li Xiao, Hye-jin Kim and Min Ding

Abstract

Purpose

The advancement of multimedia technology has spurred the use of multimedia in business practice. The adoption of audio and visual data will accelerate as marketing scholars become more aware of the value of audio and visual data and the technologies required to reveal insights into marketing problems. This chapter aims to introduce marketing scholars to this field of research.

Design/methodology/approach

This chapter reviews the current technology in audio and visual data analysis and discusses rewarding research opportunities in marketing using these data.

Findings

Compared with traditional data like survey and scanner data, audio and visual data provide richer information and are easier to collect. Given this superiority, data availability, feasibility of storage and increasing computational power, we believe that these data will contribute to better marketing practices with the help of marketing scholars in the near future.

Practical implications

The adoption of audio and visual data in marketing practices will help practitioners to get better insights into marketing problems and thus make better decisions.

Value/originality

This chapter makes the first attempt in the marketing literature to review the current technology in audio and visual data analysis and proposes promising applications of such technology. We hope it will inspire scholars to utilize audio and visual data in marketing research.

Details

Review of Marketing Research
Type: Book
ISBN: 978-1-78190-761-0

Article
Publication date: 3 November 2020

Femi Emmanuel Ayo, Olusegun Folorunso, Friday Thomas Ibharalu and Idowu Ademola Osinuga

Abstract

Purpose

Hate speech is an expression of intense hatred. Twitter has become a popular source of data for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has received special research attention in recent studies; hence, there is a need to design a generic metadata architecture and an efficient feature extraction technique to enhance hate speech detection.

Design/methodology/approach

This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that includes Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction and Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input to the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language or neither.
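
The improved cuckoo search optimizer and the topic inference step are specific to the paper and are not reproduced here. The sketch below only illustrates the hybrid-feature idea the abstract describes: a TF-IDF word-level vector concatenated with an LSTM sentence-level embedding and fed to a three-class output layer. The toy tweets, labels and model sizes are placeholders.

```python
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["i hate those people", "that referee was offensive nonsense", "lovely weather today"]
labels = torch.tensor([0, 1, 2])  # 0 = hate speech, 1 = offensive, 2 = neither (toy labels)

# Word-level features: TF-IDF over the tweet collection.
tfidf = TfidfVectorizer()
tfidf_feats = torch.tensor(tfidf.fit_transform(tweets).toarray(), dtype=torch.float32)

# Sentence-level features: last hidden state of a small LSTM over word indices.
vocab = {w: i + 1 for i, w in enumerate(sorted({w for t in tweets for w in t.split()}))}
max_len = max(len(t.split()) for t in tweets)
ids = torch.tensor([[vocab[w] for w in t.split()] + [0] * (max_len - len(t.split()))
                    for t in tweets])

class HybridClassifier(nn.Module):
    def __init__(self, vocab_size, tfidf_dim, emb_dim=16, hidden=32, classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden + tfidf_dim, classes)

    def forward(self, ids, tfidf_feats):
        _, (h, _) = self.lstm(self.emb(ids))   # h: (1, batch, hidden)
        hybrid = torch.cat([h[-1], tfidf_feats], dim=1)
        return self.out(hybrid)                # logits for the three classes

model = HybridClassifier(vocab_size=len(vocab) + 1, tfidf_dim=tfidf_feats.shape[1])
loss = nn.CrossEntropyLoss()(model(ids, tfidf_feats), labels)
loss.backward()  # the paper instead tunes weights with an improved cuckoo search
print(float(loss))
```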

Findings

The proposed method showed better results than other related methods when tested on the collected Twitter datasets. To validate its performance, a paired-sample t-test and post hoc multiple comparisons were used to compare the significance and means of the proposed method against other related hate speech detection methods.

Research limitations/implications

The evaluation results showed that the proposed method outperforms other related methods, with a mean F1-score of 91.3.

Originality/value

The main novelty of this study is the use of an automatic topic spotting measure based on a naïve Bayes model to improve feature representation.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 16 August 2005

Alison R. Fragale

Abstract

The verbal and nonverbal behaviors that individuals display (i.e., their communication styles) influence the status positions they attain in their task groups. Prior research has generally concluded that communication behaviors that convey agency (i.e., characteristics denoting intelligence, ambition, and dominance) are more effective for obtaining a high-status position in a task group than communication behaviors that convey communality (i.e., characteristics denoting warmth, sincerity, and agreeableness). The message from these prior studies is that it is more status enhancing to be smart than to be social. The objective of this chapter is to challenge this assertion and argue that in some task groups it may be more status enhancing to be social rather than to be smart. I suggest that the status benefits of particular communication styles depend on the characteristics of the group to which an individual belongs. Thus, in contrast to prior research in this area, I argue for a more contextual approach to the study of communication styles and status conferral, focusing on how structural and process differences between groups influence how the group members’ words and actions are evaluated.

Details

Status and Groups
Type: Book
ISBN: 978-1-84950-358-7

Article
Publication date: 1 February 2002

Oluremi B. Ayoko, Charmine E.J. Härtel and Victor J. Callan

Abstract

This study presents an investigation of the communicative behaviors and strategies employed in the stimulation and management of productive and destructive conflict in culturally heterogeneous workgroups. Using communication accommodation theory (CAT), we argue that the type and course of conflict in culturally heterogeneous workgroups is impacted by the communicative behaviors and strategies employed by group members during interactions. Analysis of data from participant observations, non‐participant observations, semi‐structured interviews and self‐report questionnaires supports CAT‐based predictions and provides fresh insights into the triggers and management strategies associated with conflict in culturally heterogeneous workgroups. In particular, results indicated that the more groups used discourse management strategies, the more they experienced productive conflict. In addition, the use of explanation and checking of own and others' understanding was a major feature of productive conflict, while speech interruptions emerged as a strategy leading to potential destructive conflict. Groups where leaders emerged and assisted in reversing communication breakdowns were better able to manage their discourse, and achieved consensus on task processes. Contributions to the understanding of the triggers and the management of productive conflict in culturally heterogeneous workgroups are discussed.

Details

International Journal of Conflict Management, vol. 13 no. 2
Type: Research Article
ISSN: 1044-4068

Article
Publication date: 1 May 2006

Mike Wald

Abstract

Lectures can be digitally recorded and replayed to provide multimedia revision material for students who attended the class and a substitute learning experience for students unable to attend. Deaf and hard of hearing people can find it difficult to follow speech through hearing alone or to take notes while they are lip‐reading or watching a sign‐language interpreter. Notetakers can only summarise what is being said, while qualified sign language interpreters with a good understanding of the relevant higher education subject content are in very scarce supply. Synchronising the speech with text captions can ensure deaf students are not disadvantaged and can assist all learners to search for relevant specific parts of the multimedia recording by means of the synchronised text. Real-time stenography transcription is not normally available in UK higher education because of the shortage of stenographers wishing to work in universities. Captions are time-consuming and expensive to create by hand, and while Automatic Speech Recognition can be used to provide real-time captioning directly from lecturers’ speech in classrooms, it has proved difficult to obtain accuracy comparable to stenography. This paper describes the development of a system that enables editors to correct errors in the captions as they are created by Automatic Speech Recognition.
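
For illustration only (this is not the system described above), the minimal sketch below shows the underlying idea of keeping ASR caption segments synchronised while an editor's correction replaces the recognised text: each segment keeps its original timing, so the corrected transcript stays aligned with the recording. The data structures and example captions are invented.

```python
from dataclasses import dataclass

@dataclass
class Caption:
    start_s: float
    end_s: float
    text: str

def apply_correction(captions, index, corrected_text):
    """Replace one caption's text while preserving its timing, so the
    corrected transcript stays synchronised with the recording."""
    fixed = list(captions)
    old = fixed[index]
    fixed[index] = Caption(old.start_s, old.end_s, corrected_text)
    return fixed

# Toy ASR output with a recognition error in the second segment.
asr_captions = [
    Caption(0.0, 2.4, "today we will look at speech recognition"),
    Caption(2.4, 4.1, "and how it can be used for real time captain in"),
]
edited = apply_correction(asr_captions, 1, "and how it can be used for real time captioning")
for c in edited:
    print(f"[{c.start_s:4.1f}-{c.end_s:4.1f}] {c.text}")
```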

Details

Interactive Technology and Smart Education, vol. 3 no. 2
Type: Research Article
ISSN: 1741-5659
