Search results

1 – 10 of over 1000
Article
Publication date: 25 July 2022

Sravanthi Chutke, Nandhitha N.M. and Praveen Kumar Lendale

Abstract

Purpose

With the advent of technology, a huge amount of data is transmitted and received over the internet. Large bandwidth is required to exchange this data, and large capacity to store it. Hence, compressing the data to be transmitted over the channel is unavoidable. The main purpose of the proposed system is to use the available bandwidth effectively: videos are compressed at the transmitter’s end and reconstructed at the receiver’s end. Compression also reduces storage requirements.

Design/methodology/approach

The paper proposes a novel compression technique for three-dimensional (3D) videos based on a zig-zag 3D discrete cosine transform. The method applies a 3D discrete cosine transform to the videos, followed by zig-zag scanning and quantization. Finally, a run-length encoding technique converts the data into a single bit stream for transmission. The videos are reconstructed using the inverse 3D discrete cosine transform, inverse zig-zag scanning (dequantization) and inverse run-length coding. The proposed method is simple and reduces the complexity of conventional techniques.
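
As a rough illustration of the pipeline described above (not the authors’ implementation), the Python sketch below applies a 3D DCT with SciPy, orders the coefficients by diagonal planes as a simple 3D generalization of zig-zag scanning, quantizes them with a single assumed step size, and run-length encodes the result. The ordering rule and the quantization step are assumptions for illustration.

```python
# Minimal sketch of the described pipeline (illustrative, not the authors' code):
# 3D DCT -> zig-zag-style scan -> quantization -> run-length encoding.
import numpy as np
from scipy.fft import dctn

def zigzag3d_order(shape):
    """Order indices by diagonal planes (i + j + k), a simple 3D
    generalization of 2D zig-zag scanning (assumed here)."""
    idx = [(i, j, k) for i in range(shape[0])
           for j in range(shape[1])
           for k in range(shape[2])]
    return sorted(idx, key=lambda t: (sum(t), t))

def run_length_encode(seq):
    """Encode a sequence as (value, run-length) pairs."""
    runs, prev, count = [], seq[0], 1
    for v in seq[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def compress_block(video_block, q_step=16.0):
    """Compress one 3D block (frames x height x width)."""
    coeffs = dctn(video_block.astype(float), norm="ortho")  # 3D DCT
    quantized = np.round(coeffs / q_step).astype(int)       # assumed step size
    scanned = [quantized[t] for t in zigzag3d_order(quantized.shape)]
    return run_length_encode(scanned)

# Example on a random 8-frame, 8x8-pixel block.
block = np.random.randint(0, 256, size=(8, 8, 8))
print(len(compress_block(block)), "run-length pairs")
```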

Findings

Coding reduction, code word reduction, peak signal-to-noise ratio (PSNR), mean square error, compression percentage and compression ratio are calculated, demonstrating the superiority of the proposed method over conventional methods.

Originality/value

Zig-zag quantization and run-length encoding combined with the 3D discrete cosine transform achieve compression of up to 90% with a PSNR of 41.98 dB. The proposed method can be used in multimedia applications where bandwidth, storage and data costs are the major concerns.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

Abstract

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communication and internet transactions. Two measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a Modified Run Length Encoding (RLE) to compress images. In the first stage, the Frei-Chen average subspace is applied to each 3 × 3 block, and blocks with the highest energy in that subspace are replaced by a single value representing the average of their pixels. Although the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image while improving the compression factor. In the second stage, RLE is applied to increase the compression factor further, without adding any distortion to the decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
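
A minimal sketch of the two-stage idea follows, under assumptions the abstract does not pin down: the energy threshold, border handling and the exact “modified” RLE variant are placeholders, and plain RLE stands in for the latter.

```python
# Illustrative sketch of the two-stage scheme; the threshold and plain RLE
# are assumptions, not the paper's exact "modified" variant.
import numpy as np

# Frei-Chen "average" basis: the uniform 3x3 mask, normalized to unit energy.
W_AVG = np.ones((3, 3)) / 3.0

def compress(image, energy_threshold=0.95):
    """Stage 1: replace 3x3 blocks dominated by the average subspace with
    their mean. Stage 2: run-length encode the flattened result."""
    h, w = image.shape
    out = image.astype(float).copy()
    for i in range(0, h - 2, 3):
        for j in range(0, w - 2, 3):
            block = image[i:i + 3, j:j + 3].astype(float)
            total = np.sum(block * block)             # total block energy
            avg_energy = np.sum(block * W_AVG) ** 2   # energy in average subspace
            if total > 0 and avg_energy / total >= energy_threshold:
                out[i:i + 3, j:j + 3] = block.mean()  # lossy: one value per block
    flat = out.ravel()
    runs, prev, count = [], flat[0], 1
    for v in flat[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

runs = compress(np.random.randint(0, 256, size=(9, 9)))
```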

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 30 August 2013

Vanessa El‐Khoury, Martin Jergler, Getnet Abebe Bayou, David Coquil and Harald Kosch

Abstract

Purpose

Fine-grained video content indexing, retrieval, and adaptation require accurate metadata describing the video structure and semantics down to the lowest granularity, i.e. the object level. The authors address these requirements by proposing the semantic video content annotation tool (SVCAT) for structural and high-level semantic video annotation. SVCAT is a semi-automatic, MPEG-7 standard-compliant annotation tool that produces metadata according to a new object-based video content model introduced in this work. Videos are temporally segmented into shots, and shot-level concepts are detected automatically using ImageNet as background knowledge. These concepts guide the annotator in locating and selecting objects of interest, which are then tracked automatically to generate object-level metadata. The integration of shot-based concept detection with object localization and tracking drastically alleviates the annotator’s task.

Design/methodology/approach

A systematic classification of keyframes into ImageNet categories is used as the basis for automatic concept detection in temporal units. This is followed by an object tracking algorithm that obtains exact spatial information about the objects.
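
The abstract does not specify the temporal segmentation method; as a hedged illustration of the kind of shot-boundary detection that typically precedes keyframe classification, here is a common color-histogram approach in Python with OpenCV. The threshold and the method itself are assumptions, not SVCAT’s actual algorithm.

```python
# Common histogram-based shot segmentation (an assumed stand-in, not SVCAT's
# method): flag frames whose color histogram diverges from the previous frame.
import cv2

def detect_shot_boundaries(video_path, threshold=0.5):
    """Return frame indices where histogram correlation with the previous
    frame drops below a threshold (assumed value)."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [8, 8, 8], [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:
                boundaries.append(frame_idx)  # likely shot change
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return boundaries
```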

Findings

Experimental results showed that SVCAT provides accurate object-level video metadata.

Originality/value

The new contribution of this paper is an approach that uses ImageNet to obtain shot-level annotations automatically. This approach assists video annotators significantly by minimizing the effort required to locate salient objects in the video.

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 July 2006

Scott Piepenburg

Abstract

Purpose

To provide an introduction to disc-based audio-video technologies.

Design/methodology/approach

Description of the basic disc‐based audio‐video technologies.

Findings

Provides a baseline description of disc-based audio-video technologies such as the holographic versatile disc (HVD), Blu-ray, HD DVD, DVD-Audio and others.

Originality/value

This paper is useful for information management professionals who seek a greater understanding of the basics of disc storage media.

Details

Library Hi Tech News, vol. 23 no. 6
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 1 September 1999

Siriginidi Subba Rao

Abstract

Highlights the role of facsimile in inter-office communication, the development of fax technology and its current popularity over e-mail. Presents in detail the factors to be considered when choosing a fax machine. Discusses the existing standards for the various groups of machines and the features most in demand from users, and outlines the market context in India, in particular the entrance of fax into the SOHO market after its full acceptance in the enterprise office.

Details

Work Study, vol. 48 no. 5
Type: Research Article
ISSN: 0043-8022

Article
Publication date: 3 November 2014

Dimitris N. Kanellopoulos

Abstract

Purpose

The purpose of this paper is to provide a tutorial and survey on recent advances in multimedia networking from an integrated perspective of both video networking and building digital video libraries. The nature of video networking, coupled with various recent developments in standards, proposals and applications, poses great challenges to the research and industrial communities working in this area.

Design/methodology/approach

This paper presents an insightful analysis of recent and emerging multimedia applications in digital video libraries, and of video coding standards and their applications in digital libraries. Emphasis is given to those standards and mechanisms that make multimedia content adaptation fully interoperable, in line with the Universal Multimedia Access vision.

Findings

The tutorial helps elucidate the similarities and differences among the considered standards and networking applications. A number of research trends and challenges are identified, and selected promising solutions are discussed. This should stimulate further thinking on the development of this area and open up more research and application opportunities.

Research limitations/implications

The paper does not provide methodical studies of networking application scenarios for all the discussed video coding standards and Quality of Service (QoS) management mechanisms.

Practical implications

The paper provides an overview of which technologies/mechanisms are being used broadly in networking scenarios of digital video libraries. The discussed networking scenarios bring together video coding standards and various emerging wireless networking paradigms toward innovative application scenarios.

Originality/value

QoS mechanisms and video coding standards that support multimedia applications for digital video libraries need to become well-known by library managers and professional associations in the fields of libraries and archives. The comprehensive overview and critiques on existing standards and application approaches offer a valuable reference for researchers and system developers in related research and industrial communities.

Details

The Electronic Library, vol. 32 no. 6
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 6 November 2017

Chaw Thet Zan and Hayato Yamana

Abstract

Purpose

The paper aims to estimate the segment size and alphabet size of Symbolic Aggregate approXimation (SAX). In SAX, time series data are divided into a set of equal-sized segments. Each segment is represented by its mean value and mapped to a symbol from an alphabet, where the number of adopted symbols is called the alphabet size. Both parameters control the data compression ratio and the accuracy of time series mining tasks, and their optimal values depend heavily on the application and data set. In practice, the parameters are selected iteratively by analyzing entire data sets, which limits the handling of very large time series collections and reduces the applicability of SAX.
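
For context, a minimal SAX sketch in Python is given below. The segment size and alphabet size are fixed by hand here, whereas the paper’s contribution is estimating them automatically.

```python
# Minimal SAX sketch: z-normalize, piecewise aggregate approximation (PAA),
# then map each segment mean to a symbol via Gaussian breakpoints.
import numpy as np
from scipy.stats import norm

def sax(series, segment_size=4, alphabet_size=4):
    """Convert a numeric series to a SAX word."""
    x = (series - series.mean()) / series.std()              # z-normalize
    n_segments = len(x) // segment_size
    means = x[:n_segments * segment_size].reshape(
        n_segments, segment_size).mean(axis=1)               # PAA means
    # Breakpoints split the standard normal into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, means)
    return "".join(chr(ord("a") + int(s)) for s in symbols)

word = sax(np.sin(np.linspace(0, 2 * np.pi, 64)))  # a 16-symbol SAX word
```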

Design/methodology/approach

The segment size is estimated based on the Shannon sampling theorem (autoSAXSD_S) and on adaptive hierarchical segmentation (autoSAXSD_M). The alphabet size is estimated from how the mean values of the segments are distributed: a small alphabet size is set when the means are widely distributed, so that the differences among segments remain easy to distinguish.

Findings

Experimental evaluation on the University of California Riverside (UCR) data sets shows that the proposed schemes select the parameters well, achieving high classification accuracy and efficiency comparable to the state-of-the-art methods SAX and auto_iSAX.

Originality/value

The originality of this paper lies in finding the optimal SAX parameters through the proposed estimation schemes. The first parameter, the segment size, is estimated automatically by two approaches, and the second parameter, the alphabet size, is estimated from the most frequent average (mean) value among the segments.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 January 1989

Clyde W. Grotophorst

Abstract

Optical character recognition (OCR) technology can be employed to produce an ASCII-text database for mounting on computer systems. Current technologies and principles of scanning and OCR are discussed. A prototypical “local” project, the creation of a full-text database of dissertations completed at George Mason University, has been undertaken by the Fenwick Library at that institution. Problems encountered with current scanning and OCR technologies are illustrated and discussed, as are the techniques and “filter” programs developed to streamline the scanning and OCR conversion process.

Details

Library Hi Tech, vol. 7 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 14 August 2017

Pavlos Delias

Abstract

Purpose

The purpose of this paper is to present a method that exploits process analytics to discover critical knowledge about a business process. This knowledge ultimately answers the question of whether process behavior can suggest which activities should be outsourced to improve performance.

Design/methodology/approach

The author linked waste sources to process behavioral patterns, and adopted the positive deviance paradigm to highlight compelling behaviors. Various analytic tools (generalized regression, clustering, etc.) were used to provide recommendations.

Findings

Outsourcing small parts of the process is expected to yield significant process improvement, and evidence-based process analytics can effectively support the relevant decisions.

Research limitations/implications

The author had no access to the relevant policy makers (process owners).

Originality/value

The author proposed an operationalization of concepts that connects process behavior to waste sources. The author presented the use of positive deviance as a guide for waste elimination projects.

Details

Industrial Management & Data Systems, vol. 117 no. 7
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 August 1997

A. MacFarlane, S.E. Robertson and J.A. McCann

Abstract

The progress of parallel computing in information retrieval (IR) is reviewed. In particular, we stress the importance of the motivation for using parallel computing in text retrieval. We analyse parallel IR systems using a classification defined by Rasmussen and describe several such systems. We describe the retrieval models used in parallel information processing, and identify areas where we believe further research is needed.

Details

Journal of Documentation, vol. 53 no. 3
Type: Research Article
ISSN: 0022-0418
