Search results

1 – 10 of over 40000
Article
Publication date: 8 February 2016

Zhihua Li, Zianfei Tang and Yihua Yang

Abstract

Purpose

Highly efficient processing of mass data is a primary issue in building and maintaining a security video surveillance system. This paper focuses on the architecture of a security video surveillance system based on Hadoop parallel processing technology in a big data environment.

Design/methodology/approach

A hardware framework for a security video surveillance network cascaded system (SVSNCS) was constructed on the basis of the Internet of Things, network cascade technology and the Hadoop platform. The architecture model of the SVSNCS was then proposed using Hadoop and a big data processing platform.
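The abstract gives no implementation detail, but the Hadoop-style parallel processing it invokes follows the map/shuffle/reduce pattern. A minimal Python sketch over invented per-camera event records (the camera IDs, events and counting task are illustrative, not from the paper):

```python
from collections import defaultdict

def map_phase(records):
    # map: emit (camera_id, 1) for every motion event in the input split
    for camera_id, event in records:
        if event == "motion":
            yield camera_id, 1

def shuffle(pairs):
    # shuffle: group intermediate pairs by key, as Hadoop does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: sum the per-camera event counts
    return {camera: sum(counts) for camera, counts in groups.items()}

records = [("cam1", "motion"), ("cam2", "idle"), ("cam1", "motion")]
print(reduce_phase(shuffle(map_phase(records))))  # {'cam1': 2}
```

In a real Hadoop deployment the map and reduce functions run on separate nodes over distributed video splits; the sketch only shows the data-flow contract.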

Findings

Finally, we suggested a video processing procedure that follows the characteristics of the cascade network.

Originality/value

Our paper, which focused on the architecture of a security video surveillance system in a big data environment based on Hadoop parallel processing technology, provides high-quality video surveillance services for the security domain.

Details

World Journal of Engineering, vol. 13 no. 1
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 7 August 2017

Saravanan Devaraj

Abstract

Purpose

Data mining is the process of extracting knowledge from a huge data set. Multimedia is data that contains diverse types such as audio, video, image, text and motion. In this growing field, mining video data plays a vital role. In video data mining, video data are grouped into frames, and the fast retrieval of needed information from this vast number of frames is important. This paper aims to propose a BIRCH-based clustering method for content-based image retrieval.

Design/methodology/approach

In an image retrieval system, image segmentation plays a very important role. A text file is normally divided into sections, that is, pieces, sentences, words and characters, so that its information can be organized and indexed effectively. In a video, by contrast, the information is dynamic in nature and must be converted to a static form for easy retrieval; for this, video files are divided into a number of frames or segments. After the segmentation process, images are prepared for the retrieval process, and unwanted images are removed from the data set. In the noise- or unwanted-image-removal pseudo-code, the image pixel value represents the difference between the pixel values of two adjacent images; by assuming a threshold on this value, duplicate images are found and then removed from the data set. Clustering is used in many applications as a stand-alone tool to gain insight into data distribution and as a pre-processing step for other algorithms (Ester et al., 1996). Specifically, it is used in pattern recognition, spatial data analysis, image processing, economic science, document classification, etc. Hierarchical clustering algorithms are classified as agglomerative or divisive. BIRCH uses a clustering attribute (CA) and a clustering feature hierarchy (CA_Hierarchy) to form clusters and can handle multidimensional data objects. The BIRCH algorithm is memory-oriented, that is, memory constraints are involved in processing the data sets. This information is represented in Figures 6-10. To form clusters, it uses the number of objects in the cluster (A), the sum of all points in the data set (S) and the squared sum of all objects (P).
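The (A, S, P) summary named here corresponds to BIRCH's clustering feature (usually written (N, LS, SS)): the cluster centroid and radius can be derived from it without storing the points themselves. A minimal sketch, with variable names taken from the abstract and the sample points invented:

```python
import numpy as np

class ClusteringFeature:
    """BIRCH-style cluster summary using the abstract's names:
    A = number of objects, S = linear sum of points, P = squared sum."""

    def __init__(self, dim):
        self.A = 0
        self.S = np.zeros(dim)
        self.P = 0.0

    def add(self, x):
        # absorbing a point updates only the three summaries
        x = np.asarray(x, dtype=float)
        self.A += 1
        self.S += x
        self.P += float(x @ x)

    def centroid(self):
        return self.S / self.A

    def radius(self):
        # root-mean-squared distance of the points from the centroid,
        # recoverable from (A, S, P) without the points themselves
        c = self.centroid()
        return float(np.sqrt(max(self.P / self.A - c @ c, 0.0)))

cf = ClusteringFeature(2)
for point in [(0, 0), (2, 0), (1, 1)]:
    cf.add(point)
# centroid (1.0, 0.333...), radius sqrt(8/9) ~ 0.943
```

scikit-learn's `Birch` estimator maintains the same summaries inside a CF-tree; the sketch shows only the statistics, not the tree insertion logic.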

Findings

The proposed technique yields effective cluster formation.

Originality/value

BIRCH uses a novel approach to model the degree of inter-connectivity and closeness between each pair of clusters, taking into account the internal characteristics of the clusters themselves.

Details

World Journal of Engineering, vol. 14 no. 4
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 8 September 2023

Xiancheng Ou, Yuting Chen, Siwei Zhou and Jiandong Shi

Abstract

Purpose

With the continuous growth of online education, the quality of online educational videos has become an increasingly prominent issue, leaving students in online learning facing the dilemma of knowledge confusion. Existing mechanisms for controlling the quality of online educational videos suffer from subjectivity and low timeliness. An important aspect of monitoring the quality of online educational videos is analyzing their metadata features and log data. With the development of artificial intelligence, deep learning techniques with strong predictive capabilities provide new methods for predicting the quality of online educational videos, effectively overcoming the shortcomings of existing methods. The purpose of this study is to find a deep neural network that can model the dynamic and static features of a video itself, as well as the relationships between videos, to achieve dynamic monitoring of the quality of online educational videos.

Design/methodology/approach

The quality of a video cannot be directly measured. According to previous research, the authors use engagement to represent the level of video quality. Engagement is the normalized participation time, which represents the degree to which learners tend to participate in the video. Based on existing public data sets, this study designs an online educational video engagement prediction model based on dynamic graph neural networks (DGNNs). The model is trained based on the video’s static features and dynamic features generated after its release by constructing dynamic graph data. The model includes a spatiotemporal feature extraction layer composed of DGNNs, which can effectively extract the time and space features contained in the video's dynamic graph data. The trained model is used to predict the engagement level of learners with the video on day T after its release, thereby achieving dynamic monitoring of video quality.

Findings

Models with spatiotemporal feature extraction layers consisting of four types of DGNNs can accurately predict the engagement level of online educational videos. Of these, the model using the temporal graph convolutional neural network has the smallest prediction error. In dynamic graph construction, using cosine similarity and Euclidean distance functions with reasonable threshold settings can construct a structurally appropriate dynamic graph. In the training of this model, the amount of historical time series data used will affect the model’s predictive performance. The more historical time series data used, the smaller the prediction error of the trained model.
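As a rough sketch of the thresholded graph construction described here, edges can be added between videos whose feature vectors exceed a cosine-similarity threshold (the vectors and the 0.8 threshold below are invented for illustration; the paper's dynamic graphs also carry temporal information not modeled here):

```python
import numpy as np

def build_edges(features, sim_threshold=0.8):
    """Connect pairs of videos whose feature vectors have cosine
    similarity at or above sim_threshold."""
    X = np.asarray(features, dtype=float)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sim = U @ U.T                                     # pairwise cosine similarity
    n = len(U)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sim[i, j] >= sim_threshold]

# videos 0 and 1 have near-parallel features; video 2 is orthogonal to both
print(build_edges([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]]))  # [(0, 1)]
```

As the findings note, the threshold controls the graph's structure: too low and the graph becomes dense and noisy, too high and informative edges are lost.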

Research limitations/implications

A limitation of this study is that not all video data in the data set was used to construct the dynamic graph due to memory constraints. In addition, the DGNNs used in the spatiotemporal feature extraction layer are relatively conventional.

Originality/value

In this study, the authors propose an online educational video engagement prediction model based on DGNNs, which can achieve the dynamic monitoring of video quality. The model can be applied as part of a video quality monitoring mechanism for various online educational resource platforms.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 5 June 2017

Eugene Yujun Fu, Hong Va Leong, Grace Ngai and Stephen C.F. Chan

Abstract

Purpose

Social signal processing under affective computing aims at recognizing and extracting useful human social interaction patterns. Fighting is a common social interaction in real life, and a fight detection system has wide applications. This paper aims to detect fights in a natural and low-cost manner.

Design/methodology/approach

Research on fight detection is often based on visual features, demanding substantial computation and good video quality. In this paper, the authors propose an approach to detect fight events through motion analysis. Most existing works evaluated their algorithms on public data sets manifesting simulated fights, where the fights are acted out by actors. To evaluate real fights, the authors collected videos involving real fights to form a data set. Based on the two types of data sets, the authors evaluated the performance of their motion signal analysis algorithm, comparing it with the state-of-the-art approach based on MoSIFT descriptors with a Bag-of-Words mechanism and with basic motion signal analysis with Bag-of-Words.
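The Bag-of-Words mechanism mentioned here quantizes each local descriptor (e.g. MoSIFT) against a learned codebook and represents a clip as a normalized histogram of codeword counts. A generic sketch of that encoding, not the authors' pipeline; the descriptors and codebook are invented:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local motion descriptor to its nearest codeword
    (Euclidean distance) and return a normalized codeword histogram."""
    D = np.asarray(descriptors, dtype=float)
    C = np.asarray(codebook, dtype=float)
    # squared distance from every descriptor to every codeword
    d2 = ((D[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                  # nearest-codeword index
    hist = np.bincount(words, minlength=len(C)).astype(float)
    return hist / hist.sum()

# two descriptors fall near codeword 0, one near codeword 1
print(bow_histogram([[0, 0], [0.1, 0], [5, 5]], [[0, 0], [5, 5]]))
```

The resulting fixed-length histogram is what a classifier is trained on, which is why the approach is insensitive to the number of descriptors per clip.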

Findings

The experimental results indicate that the proposed approach accurately detects fights in real scenarios and performs better than the MoSIFT approach.

Originality/value

By collecting and annotating real surveillance videos containing real fight events and augmenting with well-known data sets, the authors proposed, implemented and evaluated a low computation approach, comparing it with the state-of-the-art approach. The authors uncovered some fundamental differences between real and simulated fights and initiated a new study in discriminating real against simulated fight events, with very good performance.

Details

International Journal of Pervasive Computing and Communications, vol. 13 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 2 February 2024

Koraya Techawongstien

Abstract

Purpose

The Thai video game domain has witnessed substantial growth in recent years. However, many games enjoyed by Thai players are in foreign languages, with only a handful of titles translated/localized into the Thai locale. Some Thai video game enthusiasts have taken on the role of unofficial translators/localizers, contributing to a localization domain that accommodates both official and unofficial translation/localization efforts. This general review paper aims to outline the author's experiences in collecting data within the domain of video game translation/localization in Thailand.

Design/methodology/approach

Using a descriptive approach, this general review paper employs the netnography method. It sheds light on the complexities of video game translation/localization in Thailand and incorporates semi-structured interviews with a snowball sampling technique for the selection of participants and in-game data collection methods.

Findings

The netnography method has proved instrumental in navigating the intricacies of this evolving landscape. Adopting the netnography method for data collection in this research contributes to establishing more robust connections with the research sites. “Inside” professionals and individuals play a significant role in data gathering by recommending additional sources of information for the research.

Originality/value

While netnography is conventionally applied in market and consumer research, this paper demonstrates its efficacy in unraveling the dynamics of video game translation/localization in Thailand.

Details

Qualitative Research Journal, vol. 24 no. 2
Type: Research Article
ISSN: 1443-9883

Article
Publication date: 6 June 2023

Ha Nguyen and Prasina Parameswaran

Abstract

Purpose

The goal of this study is to explore how content creators engage in critical data literacies on TikTok, a social media site that encourages the creation and dissemination of user-created, short-form videos. Critical data literacies encompass the ability to reason with, critique, control, and repurpose data for creative uses. Existing work on critical data literacies on social media has focused on understanding of personal data, critique of data use, and strategies to protect privacy. This work focuses on how TikTok content creators repurpose data to construct their own narratives.

Design/methodology/approach

Through hashtag search, the authors created a corpus of 410 TikTok videos focused on discussing environmental and climate action, and qualitatively coded the videos for data literacies practices and video features (audio, footage, background images) that may support these practices.

Findings

Content creators engaged in multiple practices to attach meanings to data and situate environmental and climate action discourse in lived experiences. While there were instances of no data practices, the authors found cases where creators compiled different data sources, situated data in personal and local contexts, and positioned their experiences as data points to supplement or counter other statistics. Creators further leveraged the platform’s technical features, particularly the ability to add original audio and background images, to add narratives to the collective discourse.

Originality/value

This study presents a unique focus on examining critical data literacies on social media. Findings highlight how content creators repurpose data and integrate personal experiences. They illustrate platform features to support data practices and inform the design of learning environments.

Details

Information and Learning Sciences, vol. 124 no. 5/6
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 5 July 2021

Xuhui Li, Liuyan Liu, Xiaoguang Wang, Yiwen Li, Qingfeng Wu and Tieyun Qian

Abstract

Purpose

The purpose of this paper is to propose a graph-based representation approach for evolutionary knowledge under the big data circumstance, aiming to gradually build conceptual models from data.

Design/methodology/approach

A semantic data model named meaning graph (MGraph) is introduced to represent knowledge concepts to organize the knowledge instances in a graph-based knowledge base. MGraph uses directed acyclic graph–like types as concept schemas to specify the structural features of knowledge with intention variety. It also proposes several specialization mechanisms to enable knowledge evolution. Based on MGraph, a paradigm is introduced to model the evolutionary concept schemas, and a scenario on video semantics modeling is introduced in detail.

Findings

MGraph is fit for the evolution features of representing knowledge from big data and lays the foundation for building a knowledge base under the big data circumstance.

Originality/value

The representation approach based on MGraph can effectively and coherently address the major issues of evolutionary knowledge from big data. The new approach is promising in building a big knowledge base.

Details

The Electronic Library, vol. 39 no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 19 August 2021

Oussama BenRhouma, Ali AlZahrani, Ahmad AlKhodre, Abdallah Namoun and Wasim Ahmad Bhat

Abstract

Purpose

The purpose of this paper is to investigate the private data pertaining to users' interactions with social media applications that can be recovered from second-hand Android devices.

Design/methodology/approach

This study uses a methodology based on black-box testing principles to develop use cases that simulate real-world scenarios of the activities users perform on the social media application. The authors executed these use cases in a controlled experiment and examined the Android smartphone to recover the private data pertaining to them.

Findings

The results suggest that the social media data recovered from Android devices can reveal a complete timeline of the activities performed by the user; identify all the videos watched, uploaded, shared and deleted by the user; disclose the user's username and user-id; unveil the email addresses used to download the application and share videos with other users; and expose the user's social network on the platform. Forensic investigators may find this data helpful in investigating crimes such as cyberbullying, racism, blasphemy, vehicle theft, road accidents and so on. However, this data breach in Android devices exposes users to privacy breaches, identity theft and profiling in the second-hand market.

Practical implications

The perceived notion that application removal and a factory reset sanitise data can have serious implications. Though helpful to forensic investigators, it leaves users vulnerable to privacy breaches, identity theft, profiling and social network disclosure in the second-hand market. At the same time, users' sensitivity to data breaches might compel them to refrain from selling their Android devices in the second-hand market and hamper device recycling.

Originality/value

This study attempts to bridge the literature gap on social media data breaches in second-hand Android devices by experimentally determining the extent of the breach. The findings can help digital forensic investigators solve crimes such as vehicle theft, road accidents, cybercrimes and so on. They can assist smartphone users in deciding whether to sell their smartphones in the second-hand market and, at the same time, encourage developers and researchers to design methods for social media data sanitisation.

Details

Information & Computer Security, vol. 30 no. 1
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 1 February 2006

H. Kabir, Gholamali C. Shoja and Eric G. Manning

Abstract

Streaming audio/video content over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth; for this reason, a streaming server alone cannot scale a streaming service well. An entire audio/video media file often cannot be cached, due to the intellectual property concerns of content owners, security reasons and its large size, which makes a streaming service hard to scale using conventional proxy servers. Media file compression using variable-bit-rate (VBR) encoding is necessary for constant-quality video playback, although it produces traffic bursts. Traffic bursts either waste network bandwidth or cause hiccups in the playback. Large network latency and jitter also cause long start-up delays and unwanted pauses in the playback, respectively. In this paper, we propose a proxy-based constant-bit-rate (CBR) transmission scheme for VBR-encoded videos and a scalable streaming scheme that uses the CBR-transmission scheme to stream stored videos over the Internet. Our CBR-streaming scheme allows a server to transmit a VBR-encoded video at a constant bit rate, close to its mean encoding bit rate, and deals with network latency and jitter efficiently in order to provide quick, hiccup-free playback without caching an entire media file. Our scalable streaming scheme also allows many clients to share a server stream. We use prefix buffers at the proxy to cache the prefixes of popular videos, to minimize the start-up delay and to enable near-mean-bit-rate streaming from the server as well as from the proxy. We use smoothing buffers at the proxy not only to eliminate jitter and traffic-burst effects but also to enable many clients to share the same server stream. We present simulation results to demonstrate the effectiveness of our streaming scheme.
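The start-up delay needed for hiccup-free CBR transmission of a VBR-encoded video can be computed from the frame sizes and the chosen constant rate: the delay must cover the worst-case lead the cumulative consumption curve takes over CBR delivery. A textbook sketch under that model (frame sizes, frame rate and bit rate are illustrative; this is not the paper's prefix-caching scheme itself):

```python
def startup_delay(frame_bits, fps, rate_bps):
    """Smallest start-up delay (seconds) so that constant-bit-rate delivery
    at rate_bps never underflows playback of a VBR-encoded video."""
    delay = 0.0
    consumed = 0.0
    for i, bits in enumerate(frame_bits):
        consumed += bits
        # frame i must be fully delivered by its playback time delay + i/fps,
        # so delay >= consumed/rate - i/fps must hold for every frame
        delay = max(delay, consumed / rate_bps - i / fps)
    return delay

# a burst in the third frame forces a larger start-up delay than the mean rate suggests
print(startup_delay([1000, 1000, 4000], fps=1, rate_bps=1000))  # 4.0
```

Caching a video's prefix at the proxy, as the paper proposes, serves exactly this early worst-case data locally, which is why it shrinks the client's start-up delay.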

Details

Interactive Technology and Smart Education, vol. 3 no. 1
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 9 June 2020

Jihyun Kim, Kara Suzuka and Elizabeth Yakel

Abstract

Purpose

This research investigated the reuse of Video Records of Practice (VRPs) – i.e. a type of qualitative data documenting teaching and learning in educational settings. It studied how reusers' purposes and experience level with VRP reuse influence the importance of various VRP selection criteria and how these differ depending on whether the main goal of reuse was research or teaching. It also examined whether two different dimensions of qualitative research – reflexivity and context – were factors in VRP reuse.

Design/methodology/approach

The study reports on surveys of reusers at four VRP repositories. Questions were based on the literature and interviews with VRP reusers. The response rate was 20.6% (180 of 872 distributed surveys). This paper focused on 126 respondents who affirmatively responded they reused VRPs from a repository.

Findings

Researchers using VRPs were primarily interested in examining a broad range of processes in education and studying/improving ways to measure differences and growth in education. Reusers with teaching goals were commonly interested in VRPs to engage learners in showing examples/exemplars of – and reflecting on – teaching and learning. These differences between research and teaching led to varied expectations about VRPs, such as the amount of content needed and necessary contextual information to support reuse.

Research limitations/implications

While repositories focus on exposing content, understanding and communicating certain qualities of that content can help reusers identify VRPs and align goals with selection decisions.

Originality/value

Although qualitative data are increasingly reused, research has rarely focused on identifying how qualitative data reusers employ selection criteria. This study focused on VRPs as one type of qualitative data and identified the attributes of VRPs that reusers perceived to be important during selection. These will help VRP repositories determine which metadata and documentation meet reusers' goals.

Details

Aslib Journal of Information Management, vol. 72 no. 3
Type: Research Article
ISSN: 2050-3806
