Search results
1 – 10 of 111
Xiaochun Guan, Sheng Lou, Han Li and Tinglong Tang
Abstract
Purpose
Deployment of deep neural networks on embedded devices is becoming increasingly popular because it can reduce latency and energy consumption for data communication. This paper aims to present a method for deploying deep neural networks on a quad-rotor aircraft to further expand its application scope.
Design/methodology/approach
In this paper, a design scheme is proposed to implement the flight mission of the quad-rotor aircraft based on multi-sensor fusion. It integrates an attitude acquisition module, a global positioning system position acquisition module, an optical flow sensor, an ultrasonic sensor and a Bluetooth communication module, among others. A 32-bit microcontroller is adopted as the main controller for the quad-rotor aircraft. To make the quad-rotor aircraft more intelligent, the study also proposes a method to deploy a pre-trained deep neural network model on the microcontroller based on the software packages of the RT-Thread internet of things operating system.
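The attitude side of such a multi-sensor fusion scheme is often implemented on a 32-bit microcontroller as a complementary filter that blends gyroscope integration with accelerometer correction. The sketch below illustrates that general idea only; the gain, sample rate and sensor values are hypothetical, and the abstract does not state which fusion algorithm the authors actually use.

```python
# Minimal complementary-filter sketch for attitude estimation from a gyroscope
# and an accelerometer -- a lightweight fusion technique often used on small
# flight controllers. All sensor readings and the gain ALPHA are hypothetical.
import math

ALPHA = 0.98  # trust in the integrated gyro rate vs. the accelerometer angle
DT = 0.01     # sample period in seconds (100 Hz control loop)

def accel_pitch(ax, ay, az):
    """Pitch angle (rad) implied by the gravity vector seen by the accelerometer."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def fuse(pitch, gyro_rate, ax, ay, az):
    """One filter step: integrate the gyro, then correct drift with the accelerometer."""
    gyro_estimate = pitch + gyro_rate * DT
    return ALPHA * gyro_estimate + (1.0 - ALPHA) * accel_pitch(ax, ay, az)

# Hypothetical hover: no rotation, gravity straight down -> the estimate should
# settle near zero even when it starts deliberately wrong.
pitch = 0.3  # wrong initial estimate (rad)
for _ in range(500):
    pitch = fuse(pitch, gyro_rate=0.0, ax=0.0, ay=0.0, az=9.81)
print(round(pitch, 4))
```

The accelerometer term slowly pulls the integrated gyro estimate back toward the gravity reference, which is why such filters tolerate gyro drift on cheap hardware.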
Findings
This design provides a simple and efficient design scheme to further integrate artificial intelligence (AI) algorithm for the control system design of quad-rotor aircraft.
Originality/value
This method provides an application example and a design reference for the implementation of AI algorithms on unmanned aerial vehicle or terminal robots.
Abstract
Purpose
In the digital age, organizations want to build a more powerful machine learning model that can serve the increasing needs of people. However, enhancing privacy and data security is one of the challenges for machine learning models, especially in federated learning. Parties want to collaborate with each other to build a better model, but they do not want to reveal their own data. This study aims to introduce threats and defenses to privacy leaks in the collaborative learning model.
Design/methodology/approach
In the collaborative model, the attacker is either the central server or a participant. In this study, the attacker is on the side of the participant, who is “honest but curious.” Attack experiments are run on the participant’s side, which performs two tasks: the first is to train the collaborative learning model; the second is to build a generative adversarial network (GAN) model that performs the attack to infer more information from the parameters received from the central server. There are three typical types of attack: white box, black box without auxiliary information and black box with auxiliary information. The experimental environment is set up with PyTorch on the Google Colab platform, running on a graphics processing unit with the Labeled Faces in the Wild and Canadian Institute For Advanced Research-10 data sets.
Findings
The paper assumes that the privacy leakage attack resides on the participant’s side and that the information in the parameter server contains more knowledge than is needed to train a collaborative machine learning model. This study compares the success of inference attacks from model parameters based on GAN models. Three GAN models are used in this method: conditional GAN, control GAN and Wasserstein GAN (WGAN). Of these three, the WGAN model proved the most stable.
Originality/value
Concerns about privacy and security for machine learning models are increasingly important, especially for collaborative learning. The paper contributes experimental results on privacy attacks from the participant’s side in the collaborative learning model.
Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono
Abstract
Purpose
The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.
Design/methodology/approach
The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce the criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach to allow computation performed on edge devices, to limit the movement of sensitive and identifiable data and to eliminate the dependency on cloud computing at a central server.
Findings
The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results show how the system can fully address the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras and its compliance with responsible artificial intelligence.
Originality/value
The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on edge that extends from a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.
Yaqi Liu, Shuzhen Fang, Lingyu Wang, Chong Huan and Ruixue Wang
Abstract
Purpose
In recent years, personalized recommendations have facilitated easy access to users' personal information and historical interactions, thereby improving recommendation effectiveness. However, due to privacy risk concerns, it is essential to balance the accuracy of personalized recommendations with privacy protection. Accordingly, this paper aims to propose a neural graph collaborative filtering personalized recommendation framework based on federated transfer learning (FTL-NGCF), which achieves high-quality personalized recommendations with privacy protection.
Design/methodology/approach
FTL-NGCF uses a third-party server to coordinate local users to train the graph neural networks (GNN) model. Each user client integrates user–item interactions into the embedding and uploads the model parameters to a server. To prevent attacks during communication and thus promote privacy preservation, the authors introduce homomorphic encryption to ensure secure model aggregation between clients and the server.
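The role homomorphic encryption plays in the aggregation step can be illustrated with a textbook additively homomorphic scheme (Paillier): multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted client updates without decrypting any of them. The sketch below is a toy illustration with small, insecure primes; the abstract does not specify which encryption scheme or parameters FTL-NGCF actually uses.

```python
# Toy Paillier cryptosystem (additively homomorphic): multiplying two
# ciphertexts yields an encryption of the SUM of the plaintexts, which lets a
# server aggregate client model updates without decrypting them.
# Small fixed primes for illustration only -- not the paper's actual scheme.
import math
import random

p, q = 104729, 104723            # small primes; insecure, illustration only
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # private key
mu = pow(lam, -1, n)             # valid because the generator is g = n + 1

def encrypt(m):
    r = random.randrange(1, n)   # random blinding factor
    while math.gcd(r, n) != 1:   # keep r invertible so decryption is exact
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Two clients encrypt (integer-scaled) model parameters; the server multiplies
# the ciphertexts -- homomorphic addition -- and only the key holder sees the sum.
update_a, update_b = 1234, 5678
aggregated = (encrypt(update_a) * encrypt(update_b)) % n2
print(decrypt(aggregated))       # -> 6912, the sum recovered from ciphertexts
```

In a real deployment, model parameters would be quantized to integers before encryption and the primes would be thousands of bits long; the algebraic trick, however, is exactly this ciphertext multiplication.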
Findings
Experiments on three real data sets (Gowalla, Yelp2018, Amazon-Book) show that FTL-NGCF improves recommendation performance in terms of recall and NDCG while giving increased consideration to privacy protection relative to original federated learning methods.
Originality/value
To the best of the authors’ knowledge, no previous research has considered a federated transfer learning framework for GNN-based recommendation. The framework can be extended to other recommendation applications while maintaining privacy protection.
Loukas Tsironis, Nikos Bilalis and Vassilis Moustakis
Abstract
Purpose
To demonstrate the applicability of machine‐learning tools in quality management.
Design/methodology/approach
Two popular machine‐learning approaches, decision tree induction and association rules mining, were applied to a set of 960 production case records. The accuracy of the results was investigated using randomized experimentation, and the comprehensibility of the rules was assessed by experts in the field.
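The association-rules side of such an analysis can be sketched as follows: count how often an antecedent attribute value co-occurs with a fault label and keep rules that clear support and confidence thresholds. The records, attribute names and thresholds below are invented for illustration; they are not the 960 production cases from the study.

```python
# Minimal association-rule mining over attribute-value production records:
# report rules "(attribute, value) -> faulty" that clear support and
# confidence thresholds. Records and attribute names are invented.
from collections import Counter

records = [
    {"shift": "night", "supplier": "A", "faulty": True},
    {"shift": "night", "supplier": "B", "faulty": True},
    {"shift": "day",   "supplier": "A", "faulty": False},
    {"shift": "day",   "supplier": "B", "faulty": False},
    {"shift": "night", "supplier": "A", "faulty": True},
    {"shift": "day",   "supplier": "A", "faulty": False},
]

def mine_rules(records, target=("faulty", True), min_support=0.3, min_confidence=0.8):
    """Return ((attribute, value), support, confidence) rules implying the target."""
    n = len(records)
    antecedents = Counter()  # how often each attribute value occurs
    both = Counter()         # how often it co-occurs with the target
    for rec in records:
        hit = rec.get(target[0]) == target[1]
        for item in rec.items():
            if item[0] == target[0]:
                continue     # the target attribute cannot be its own antecedent
            antecedents[item] += 1
            if hit:
                both[item] += 1
    rules = []
    for item, count in antecedents.items():
        support = both[item] / n
        confidence = both[item] / count
        if support >= min_support and confidence >= min_confidence:
            rules.append((item, round(support, 2), round(confidence, 2)))
    return rules

print(mine_rules(records))   # -> [(('shift', 'night'), 0.5, 1.0)]
```

Rules of this form ("night shift implies faulty, confidence 1.0") are exactly the kind of comprehensible cause-effect statements the abstract says domain experts preferred over decision trees.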
Findings
Both machine‐learning approaches exhibited very good accuracy (average error was about 9 percent); however, association rules mining outperformed decision tree induction in the comprehensibility and correctness of the learned rules.
Research limitations/implications
The proposed methodology is limited with respect to case representation. Production cases are described via attribute‐value sets and the relation between attribute values cannot be determined by the selected machine‐learning methods.
Practical implications
Results demonstrate that machine‐learning techniques may be effectively used to enhance quality management procedures and the modeling of cause‐effect relationships associated with faulty products.
Originality/value
The article proposes a general methodology on how to use machine‐learning techniques to support quality management. The application of the technique in ISDN modem manufacturing demonstrates the effectiveness of the proposed general methodology.
Budati Anil Kumar, George Ghinea, S.B. Goyal, Krishna Kant Singh and Shayla Islam
Sergio Duban Morales Dussan, Mauricio Leon, Olmer Garcia-Bedoya and Ixent Galpin
Abstract
Purpose
This study aims to explore the digital divide between students living in metropolitan and non-metropolitan areas in the Antioquia region of Colombia. This is achieved by collecting data about student interactions from the Moodle learning management system (LMS), and subsequently applying supervised machine learning models to infer the gap between students in metropolitan and non-metropolitan areas.
Design/methodology/approach
This work uses the well-established Cross-Industry Standard Process for Data Mining methodology, which comprises six phases, viz., problem understanding, data understanding, data preparation, modelling, evaluation and implementation. In this case, student data was collected from the Moodle platform from the Antioquia campus of the UNAD distance learning university.
Findings
The digital divide is evident in the classification model when observing differences in variables such as the number of accesses to the LMS, the total time spent and the number of distinct IP addresses used, as well as the number of system modification events.
Originality/value
This study provides conclusions regarding the problems students in virtual education may face as a result of the digital divide in Colombia, which have become increasingly visible since the implementation of machine learning methodologies on LMSs such as Moodle. These practices may be replicated in any virtual educational context and furthermore extended to enable personalisation of various aspects of the Moodle platform to meet the individual needs of students.
Jagroop Kaur and Jaswinder Singh
Abstract
Purpose
Normalization is an important step in all natural language processing applications that handle social media text. Text from social media poses problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly for the English language. People who do not speak English code-mix text with their native language and post it on social media using the Roman script. This kind of text further aggravates the normalization problem. This paper aims to discuss the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.
Design/methodology/approach
The system is divided into two phases – candidate generation and most probable sentence selection. The candidate generation task is treated as a machine translation task in which the Roman text is the source language and Gurmukhi text is the target language. A character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses the beam search method to select the most probable sentence based on a hidden Markov model.
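The second phase described above can be sketched as a beam search over per-token candidate lists scored with bigram transition probabilities, in the spirit of a hidden Markov model. The candidate words and probabilities below are invented English stand-ins; they are not the paper's Roman-to-Gurmukhi translation model or its actual HMM parameters.

```python
# Sketch of beam-search sentence selection: at each position, extend the best
# partial sentences with the candidate tokens and keep only the top beam_width
# hypotheses, scored in log-space with emission and bigram transition
# probabilities. All candidates and probabilities are invented stand-ins.
import math

# Each position has candidate normalizations with emission scores.
candidates = [
    [("good", 0.6), ("gud", 0.4)],
    [("morning", 0.7), ("mourning", 0.3)],
]

# Bigram transition probabilities of the toy language model.
bigram = {
    ("<s>", "good"): 0.5, ("<s>", "gud"): 0.1,
    ("good", "morning"): 0.6, ("good", "mourning"): 0.1,
    ("gud", "morning"): 0.2, ("gud", "mourning"): 0.2,
}

def beam_search(candidates, beam_width=2):
    """Return the most probable token sequence under emission + bigram scores."""
    beam = [(0.0, ["<s>"])]                      # (log-probability, sequence)
    for options in candidates:
        scored = []
        for score, seq in beam:
            for token, emit in options:
                step = math.log(emit) + math.log(bigram.get((seq[-1], token), 1e-6))
                scored.append((score + step, seq + [token]))
        beam = sorted(scored, reverse=True)[:beam_width]
    best_score, best_seq = max(beam)
    return best_seq[1:]                          # drop the start symbol

print(" ".join(beam_search(candidates)))         # -> good morning
```

Widening the beam trades speed for a lower chance of pruning the globally best sentence, which is the usual tuning knob in such normalization pipelines.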
Findings
Character error rate (CER) and bilingual evaluation understudy (BLEU) score are reported. The proposed system has been compared with the Akhar software and the RB\_R2G system, which are also capable of transliterating Roman text to Gurmukhi. The system outperforms the Akhar software. The CER and BLEU scores are 0.268121 and 0.6807939, respectively, for ill-formed text.
Research limitations/implications
It was observed that the system produces dialectical variations of a word or the word with minor errors like diacritic missing. Spell checker can improve the output of the system by correcting these minor errors. Extensive experimentation is needed for optimizing language identifier, which will further help in improving the output. The language model also seeks further exploration. Inclusion of wider context, particularly from social media text, is an important area that deserves further investigation.
Practical implications
The practical implications of this study are: (1) the development of a parallel data set containing Roman and Gurmukhi text; (2) the development of a data set annotated with language tags; (3) the development of the normalization system, which is the first of its kind, proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi and can be extended to any pair of scripts; and (4) the proposed system can be used for better analysis of social media text. Theoretically, the study helps in better understanding text normalization in the social media context and opens the doors for further research in multilingual social media text normalization.
Originality/value
Existing research work focuses on normalizing monolingual text. This study contributes towards the development of a normalization system for multilingual text.
Haixiao Dai, Phong Lam Nguyen and Cat Kutay
Abstract
Purpose
Digital learning systems are crucial for education, and the data they collect can be analysed to understand students’ learning performance and improve support. The purpose of this study is to design and build an asynchronous hardware and software system that can store data on a local device until it is able to share them. It was developed for staff and students at a university who have limited internet access in areas such as the remote Northern Territory. The system can asynchronously link users’ devices and the central server at the university over an unstable internet connection.
Design/methodology/approach
A Learning Box has been built based on a minicomputer and a web learning management system (LMS). This study presents different options for creating such a system and discusses various approaches to data syncing. The final setup is a Moodle (Modular Object-Oriented Dynamic Learning Environment) LMS on a Raspberry Pi, which provides a Wi-Fi hotspot. The authors worked with lecturers from X University who work in remote Northern Territory regions to test this and provide feedback. This study also considered suitable data collection and techniques that can be used to analyse the available data to support learning analysis by the staff.
Findings
The resultant system has been tested in various scenarios to ensure it is robust when students’ submissions are collected. Furthermore, issues around student familiarity and ability to use online systems have been considered due to early feedback.
Research limitations/implications
Monitoring asynchronous collaborative learning systems through analytics can assist students learning in their own time. Learning Hubs can be easily set up and maintained using microcomputers that are now readily available. A phone interface is sufficient for learning when video and audio submissions are supported in the LMS.
Practical implications
This study shows that digital learning can be implemented in an offline environment by using a Raspberry Pi as the LMS server. Offline collaborative learning in remote communities can be achieved by applying asynchronous data syncing techniques, and asynchronous data syncing can be reliably achieved by using change logs and an incremental syncing technique.
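The change-log approach mentioned above can be sketched as follows: every local write is appended to an ordered log, and a sync pushes only the entries the server has not yet seen. The record structure and API below are hypothetical, invented for illustration; they are not the actual Moodle/Raspberry Pi implementation.

```python
# Sketch of change-log based incremental syncing: a node accumulates edits
# offline in an append-only log and, when connectivity returns, replays only
# the delta since the last successful sync. Structure and API are hypothetical.
import itertools

class Node:
    """A local store that records every write in an append-only change log."""
    _clock = itertools.count(1)      # stand-in for a logical timestamp

    def __init__(self):
        self.data = {}
        self.log = []                # ordered (timestamp, key, value) entries
        self.synced_upto = 0         # how much of the log the server has seen

    def write(self, key, value):
        stamp = next(Node._clock)
        self.data[key] = value
        self.log.append((stamp, key, value))

    def sync(self, server):
        """Push only the log entries appended since the last successful sync."""
        pending = self.log[self.synced_upto:]
        for stamp, key, value in pending:
            server.data[key] = value         # replay increments in order
        self.synced_upto = len(self.log)     # advance the marker only on success
        return len(pending)

class Server:
    def __init__(self):
        self.data = {}

# Offline edits accumulate locally; each sync then sends just the delta.
node, server = Node(), Server()
node.write("quiz1", "submitted")
node.write("essay", "draft")
print(node.sync(server))     # -> 2 entries pushed
node.write("essay", "final")
print(node.sync(server))     # -> 1 entry pushed (only the new change)
print(server.data["essay"])  # -> final
```

Because the marker advances only after a successful replay, an interrupted sync over unstable internet simply resends the same pending entries next time, which is what makes the technique reliable.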
Social implications
Focus on audio and video submission allows engagement in higher education by students with lower literacy but higher practice skills. Curriculum that clearly supports the level of learning required for a job needs to be developed, and the assumption that literacy is part of the skilled job in the workplace needs to be removed.
Originality/value
To the best of the authors’ knowledge, this is the first remote asynchronous collaborative LMS environment that has been implemented. This provides the hardware and software for opportunities to share learning remotely. Material to support low literacy students is also included.
Vaclav Snasel, Tran Khanh Dang, Josef Kueng and Lingping Kong
Abstract
Purpose
This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from the history, architectures and options aspects. In this review, the authors investigate different architectural aspects and provide their comparative evaluations.
Design/methodology/approach
The authors collected over 40 IMC papers of recent years related to hardware design and optimization techniques and classified them into three optimization option categories: optimization through the graphics processing unit (GPU), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data sets it was applied to, how it is designed and what the contribution of the design is.
Findings
ML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, its energy efficiency is limited because of the excessive flexibility it supports. On the other hand, hardware accelerators (field programmable gate arrays and application-specific integrated circuits) win on energy efficiency, but an individual accelerator often adapts exclusively to a single ML approach (family). From a long hardware evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers.
Originality/value
IMC’s optimization enables high-speed processing, increases performance and analyzes massive volumes of data in real-time. This work reviews IMC and its evolution. Then, the authors categorize three optimization paths for the IMC architecture to improve performance metrics.