Search results
1 – 10 of 810

Kanokwan Srisupornkornkool, Kanphajee Sornkaew, Kittithat Chatkanjanakool, Chayanit Ampairattana, Pariyanoot Pongtasom, Sompiya Somthavil, Onuma Boonyarom, Kornanong Yuenyongchaiwat and Khajonsak Pongpanit
Abstract
Purpose
To compare the electromyography (EMG) features during physical and imagined standing up in healthy young adults.
Design/methodology/approach
Twenty-two participants (aged 20–29 years) were recruited for this study. Electrodes were attached to the rectus femoris, biceps femoris, tibialis anterior and medial gastrocnemius muscles on both sides to record the EMG features during physical and imagined standing up. The percentage of maximal voluntary contraction (%MVC), onset and duration were calculated.
Findings
The onset and duration of each muscle on both sides showed no statistically significant differences between physical and imagined standing up (p > 0.05). The %MVC of all four muscles on both sides was significantly higher during physical standing up than during imagined standing up (p < 0.05). Moreover, the tibialis anterior muscle on both sides contracted significantly earlier than the other muscles (p < 0.05) during both physical and imagined standing up.
Originality/value
Muscles can be activated during imagined movement, and the patterns of muscle activity during physical and imagined standing up were similar. Imagined movement may be used in rehabilitation as an alternative technique, or combined with other techniques, to enhance the sit-to-stand (STS) skill.
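The %MVC normalization described above can be illustrated with a minimal Python sketch. This is not the authors' analysis code; the sampling rate and smoothing window are assumed values, and the envelope method (rectification plus moving average) is one common choice among several:

```python
import numpy as np

def percent_mvc(emg_task, emg_mvc, fs=1000, win_ms=100):
    """Express task EMG amplitude as a percentage of maximal voluntary contraction.

    Both inputs are raw EMG traces (1-D arrays); fs (Hz) and win_ms are
    illustrative defaults, not values from the study.
    """
    win = int(fs * win_ms / 1000)
    kernel = np.ones(win) / win
    # Rectify, then smooth with a moving average to obtain the linear envelope.
    env_task = np.convolve(np.abs(emg_task), kernel, mode="same")
    env_mvc = np.convolve(np.abs(emg_mvc), kernel, mode="same")
    return 100.0 * env_task.max() / env_mvc.max()
```

A task contraction at half the MVC amplitude would thus be reported as roughly 50 %MVC.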
Abstract
Purpose
Recently, the convolutional neural network (ConvNet) has been widely applied to the classification of motor imagery electroencephalogram (EEG) signals. However, EEG signals have a low signal-to-noise ratio because they are collected under the interference of noise, a problem that conventional ConvNet models cannot directly solve. This study aims to address this issue.
Design/methodology/approach
To solve this problem, this paper adopted a novel residual shrinkage block (RSB) to construct the ConvNet model (RSBConvNet). During feature extraction from the EEG signals, the proposed RSBConvNet suppressed the noise component and improved the classification accuracy of motor imagery. In constructing RSBConvNet, the author applied a soft thresholding strategy to suppress features unrelated to motor imagery: soft thresholding was inserted into the residual block (RB), and a threshold suited to the current EEG signal distribution was learned by minimizing the loss function. Therefore, during motor imagery feature extraction, the proposed RSBConvNet denoised the EEG signals and improved the discriminability of the classification features.
Findings
Comparative experiments and ablation studies were conducted on two public benchmark datasets. Compared with conventional ConvNet models, the proposed RSBConvNet model showed clear improvements in motor imagery classification accuracy and Kappa coefficient. Ablation studies also demonstrated the denoising ability of the RSBConvNet model. Moreover, different parameters and computational methods of the RSBConvNet model were tested on the classification of motor imagery.
Originality/value
Based on the experimental results, the RSBConvNet constructed in this paper achieves excellent recognition accuracy for the motor imagery brain–computer interface (MI-BCI) and can be used in further online MI-BCI applications.
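The core denoising operation of the residual shrinkage block can be sketched in NumPy. This is a toy illustration, not the paper's implementation: in RSBConvNet the threshold is learned per channel by minimizing the loss, whereas the data-driven heuristic below (a fraction of the mean absolute activation) is a stand-in assumption:

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise soft thresholding: shrink toward zero,
    zeroing small (presumably noise-driven) activations."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def residual_shrinkage_block(x, tau=None):
    """Toy residual shrinkage block: the residual branch is denoised
    by soft thresholding before the skip connection is added back.
    The default tau is an illustrative heuristic, not a learned value."""
    if tau is None:
        tau = 0.1 * np.mean(np.abs(x))
    return x + soft_threshold(x, tau)
```

For example, `soft_threshold(np.array([0.5, -2.0, 3.0]), 1.0)` zeroes the small entry and shrinks the others toward zero, giving `[0.0, -1.0, 2.0]`.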
Abstract
Purpose
To improve the weak recognition accuracy and robustness of classification algorithms for the brain–computer interface (BCI), this paper proposed a novel classification algorithm for motor imagery based on temporal and spatial characteristics extracted using a convolutional neural network (TS-CNN) model.
Design/methodology/approach
According to the proposed algorithm, a five-layer neural network model was constructed to classify the electroencephalogram (EEG) signals. First, the authors designed a motor imagery-based BCI experiment, and four subjects were recruited to participate for the recording of EEG signals. Then, after the EEG signals were preprocessed, the temporal and spatial characteristics of the EEG signals were extracted by longitudinal and transverse convolutional kernels, respectively. Finally, the classification of motor imagery was completed using two fully connected layers.
Findings
To validate the classification performance and efficiency of the proposed algorithm, comparative experiments with state-of-the-art algorithms were conducted. Experimental results show that the proposed TS-CNN model achieves the best performance and efficiency in the classification of motor imagery, as reflected in the accuracy, precision, recall, ROC curve and F-score indexes.
Originality/value
The proposed TS-CNN model accurately recognized the EEG signals for different motor imagery tasks, providing a theoretical basis and technical support for applying BCI control systems in the field of rehabilitation exoskeletons.
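The two kernel orientations described in the design can be illustrated with a naive 2-D convolution over a channels-by-time EEG trial. The shapes and kernel sizes below are illustrative assumptions, not the paper's configuration; following the abstract, the longitudinal kernel slides along the time axis (temporal characteristics) and the transverse kernel spans the channels (spatial characteristics):

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode 2-D cross-correlation (no padding, stride 1)."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))   # toy trial: 8 channels x 256 samples

# Longitudinal kernel: slides along time -> temporal characteristics.
temporal = conv2d_valid(eeg, rng.standard_normal((1, 16)))  # shape (8, 241)
# Transverse kernel: spans all channels at one instant -> spatial characteristics.
spatial = conv2d_valid(eeg, rng.standard_normal((8, 1)))    # shape (1, 256)
```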
Mike Knudstrup, Sharon L. Segrest and Amy E. Hurley
Abstract
In this study, interviewees in the training group were instructed to use mental imagery techniques in a simulated employment interview. Results indicated that subjects who used mental imagery performed better in the interview and reported lower perceived stress than subjects who did not. Mental imagery did not have a significant effect on perceptions of self-efficacy. Mental imagery ability had a positive effect on the perceived usefulness of mental imagery, while controllability and vividness did not. Subjects indicated positive perceptions of the mental imagery intervention and a willingness to use mental imagery again in the future. The personality variable conscientiousness had a significant effect on the mental imagery-performance relationship.
Abstract
This article suggests that simple imagery and visualization techniques can be used within the mentoring relationship. After identifying the challenge that mentors need strategies to promote mentee development, the article presents a case for using visualization and shows how this strategy has been used in other contexts. Visualization and imagery are then applied to pre-service teachers. The article concludes by exploring the potential for the use of visualization by mentors, arguing that visualization could help bring about self-actualization.
Tim Bauerle, Michael J. Brnich and Jason Navoyski
Abstract
Purpose
This paper aims to contribute to a general understanding of mental practice by investigating the utility of and participant reaction to a virtual reality maintenance training among underground coal mine first responders.
Design/methodology/approach
Researchers at the National Institute for Occupational Safety and Health’s Office of Mine Safety and Health Research (OMSHR) developed software to provide opportunities for mine rescue team members to learn to inspect, assemble and test their closed-circuit breathing apparatus and to practice those skills. In total, 31 mine rescue team members utilized OMSHR’s BG 4 Benching Trainer software and provided feedback to the development team. After training, participants completed a brief post-training questionnaire, which included demographics, perceived training climate and general training evaluation items.
Findings
The results overall indicate a generally positive reaction to and high perceived utility of the BG 4 benching software. In addition, the perceived training climate appears to have an effect on the perceived utility of the mental practice virtual reality game, with benchmen from mines with more positive training climates reporting greater perceived efficacy in the training’s ability to prepare trainees for real emergencies.
Originality/value
This paper helps to broaden current applications of mental practice and is one of the few empirical investigations into a non-rehabilitation virtual reality extension of mental practice. This paper also contributes to the growing literature advocating for greater usage of accurate and well-informed mental practice techniques, tools and methodologies, especially for occupational populations with limitations on exposure to hands-on training.
Minghua Wei and Feng Lin
Abstract
Purpose
To address the shortcomings of classifying EEG signals generated by tasks that activate the brain's sensorimotor region, such as poor performance, low efficiency and weak robustness, this paper proposes an EEG signal classification method based on multi-dimensional fusion features.
Design/methodology/approach
First, the improved Morlet wavelet is used to extract spectral feature maps from the EEG signals. Then, spatial-frequency features are extracted from the power spectral density (PSD) maps using a three-dimensional convolutional neural network (3DCNN) model. Finally, the spatial-frequency features are fed into bidirectional gated recurrent unit (Bi-GRU) models to extract spatial-frequency-sequential multi-dimensional fusion features for recognizing the brain's sensorimotor region activation tasks.
Findings
In the comparative experiments, datasets of motor imagery (MI), action observation (AO) and action execution (AE) tasks were selected to test the classification performance and robustness of the proposed algorithm. In addition, the impact of the extracted features on the sensorimotor region and their impact on the classification process are analyzed by visualization during the experiments.
Originality/value
The experimental results show that the proposed algorithm extracts the corresponding brain activation features for different action-related tasks, achieving more stable classification performance on AO/MI/AE tasks and the best robustness on EEG signals from different subjects.
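The first stage of such a pipeline, wavelet-based spectral feature extraction, can be sketched as below. This uses a standard complex Morlet transform, not the paper's "improved" variant, and the width parameter `w` is an assumed common default:

```python
import numpy as np

def morlet_power(signal, fs, freqs, w=6.0):
    """Complex Morlet wavelet power of a 1-D signal.

    Returns a (len(freqs), len(signal)) array of power over time;
    w controls the time-frequency trade-off (assumed default).
    """
    n = len(signal)
    power = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        sigma = w / (2 * np.pi * f)          # temporal std of the Gaussian envelope
        tw = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalization
        conv = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(conv) ** 2
    return power
```

Applied to a pure 10 Hz sine, the 10 Hz row of the output dominates the others, which is the property the spectral feature maps rely on.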
Chongyang Chen, Kem Z.K. Zhang, Zhaofang Chu and Matthew Lee
Abstract
Purpose
In the growing information systems (IS) literature on the metaverse, augmented reality (AR) technology is regarded as a cornerstone of the metaverse that enables interaction services. Interaction has been identified as a core technology characteristic of metaverse shopping environments. Based on previous human–technology interaction research, the authors further explicate this interaction as multimodal sensory interaction. The purpose of this study is thus to better understand the unique nature of interaction in AR technology and to highlight the technology's benefits for shopping in metaverse spaces.
Design/methodology/approach
An experiment was conducted to empirically examine the authors' research model, and the structural equation modeling (SEM) approach was used to analyze the collected data.
Findings
This study conceptualizes image, motion and touchscreen interactions as the three dimensions of multimodal sensory interaction, which can reflect visual-, kinesthetic- and haptic-based sensation stimulation. The authors' findings show that multimodal sensory interaction of AR activates consumers' intention to purchase via a psychological process. To delineate this psychological process, the authors use feelings-as-information theory to posit that experiential factors can influence cognitive factors. More specifically, multimodal sensory interaction is shown to increase multisensory experience and spatial presence, which can effectively reduce product uncertainty and information overload. The two outcomes have been considered to be key issues in online shopping environments.
Originality/value
This study is one of the first to shed light on the multimodal sensory peculiarity of AR interactions in the extant IS literature. The authors further highlight the benefits of AR in addressing major online shopping concerns about product uncertainty and information overload, which are largely overlooked by prior research. This study uses feelings-as-information theory to explain the impacts of AR interactions, revealing the essential role of the experiential process in sensory-enabling technologies, and thereby enriches existing theoretical frameworks that mostly focus on the cognitive process. The authors' findings about AR interactions provide noteworthy guidelines for the design of metaverse environments and extend our understanding of how the metaverse may bring benefits beyond traditional online shopping settings.
Thalia Anthony, Juanita Sherwood, Harry Blagg and Kieran Tranter
Giuseppe Gillini, Paolo Di Lillo, Filippo Arrichiello, Daniele Di Vito, Alessandro Marino, Gianluca Antonelli and Stefano Chiaverini
Abstract
Purpose
Over the past decade, more than 700 million people have been affected by some kind of disability or handicap. In this context, research interest in assistive robotics is growing. For people with mobility impairments, daily life operations such as dressing or feeding require the assistance of dedicated people; thus, the use of devices providing independent mobility can have a large impact on improving their quality of life. The purpose of this paper is to present the development of a robotic system aimed at assisting people with this kind of severe motion disability by providing a certain level of autonomy.
Design/methodology/approach
The system is based on a hierarchical architecture in which, at the top level, the user generates simple, high-level commands through a graphical user interface operated via a P300-based brain–computer interface (BCI). These commands are ultimately converted into joint- and Cartesian-space tasks for the robotic system, which are then handled by the robot motion control algorithm using a set-based task-priority inverse kinematic strategy. The overall architecture is realized by integrating control and perception software modules developed in the Robot Operating System (ROS) environment with the BCI2000 framework, used to operate the BCI device.
Findings
The effectiveness of the proposed architecture is validated through experiments in which a user generates commands, via an Emotiv Epoc+ BCI, to perform assistive tasks executed by a Kinova MOVO robot, an omnidirectional mobile robotic platform equipped with two lightweight seven-degree-of-freedom manipulators.
Originality/value
The P300 paradigm has been successfully integrated with a control architecture that allows a complex robotic system to be commanded to perform daily life operations. The user defines high-level commands via the BCI, leaving all low-level tasks, for example safety-related tasks, to be handled by the system in a completely autonomous manner.
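The null-space projection at the heart of a task-priority inverse kinematics scheme can be sketched as follows. This is the classical two-task strictly prioritized formulation, a simplification of the set-based strategy the paper actually uses (which additionally activates and deactivates inequality tasks such as joint limits):

```python
import numpy as np

def two_task_priority_dq(J1, dx1, J2, dx2):
    """One velocity-level step of two-task priority inverse kinematics.

    The secondary task velocity is resolved in the null space of the
    primary task Jacobian J1, so it cannot disturb the primary task.
    """
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1        # null-space projector of task 1
    dq = J1p @ dx1                             # satisfy the primary task
    # Secondary task, restricted to motions invisible to task 1.
    dq = dq + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)
    return dq
```

Because the correction term lies in the null space of `J1`, the primary task velocity `J1 @ dq` equals `dx1` exactly, while the secondary task is tracked as well as the remaining redundancy allows.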