Search results

1 – 10 of 313
Article
Publication date: 9 August 2011

Min Gyo Chung, Taehyung (George) Wang and Phillip C.‐Y. Sheu

Abstract

Purpose

Video summarisation is one of the most active fields in content‐based video retrieval research. This paper proposes a new video summarisation scheme based on socially generated temporal tags.

Design/methodology/approach

To capture users' collaborative tagging activities, the proposed scheme maintains video bookmarks, which contain temporal or positional information about videos, such as relative time codes or byte offsets. For each video, all the bookmarks collected from users are then statistically analysed to extract meaningful key frames (the video equivalent of keywords), which collectively constitute the summary of the video.
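The statistical analysis described above could, in its simplest form, amount to finding the time windows where user bookmarks cluster. The sketch below is purely illustrative, not the paper's actual algorithm; the bin width and the function name are assumptions.

```python
from collections import Counter

def summarise_from_bookmarks(bookmark_times, bin_seconds=10, num_keyframes=5):
    """Pick keyframe timestamps from user bookmark positions.

    Bookmarks that cluster in time indicate segments many users found
    worth marking; the centre of each popular bin is taken as a keyframe.
    """
    bins = Counter(int(t // bin_seconds) for t in bookmark_times)
    # Most-bookmarked bins first; break ties by earlier position.
    top = sorted(bins.items(), key=lambda kv: (-kv[1], kv[0]))[:num_keyframes]
    # Return the centre of each selected bin, in chronological order.
    return sorted(b * bin_seconds + bin_seconds / 2 for b, _ in top)

# Bookmarks from many users, in seconds from the start of the video.
times = [12, 14, 15, 88, 90, 91, 92, 300, 303, 560]
print(summarise_from_bookmarks(times, bin_seconds=10, num_keyframes=3))
# → [15.0, 95.0, 305.0]
```

A real system would map these timestamps back to actual video frames and weight bookmarks by user or recency, but the core idea is that frequency of tagging stands in for semantic importance.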

Findings

Compared with traditional video summarisation methods that use low‐level audio‐visual features, the proposed method is based on users' high‐level collaborative activities and can thus produce semantically more important summaries.

Research limitations/implications

It is assumed that the video frames around the bookmarks inserted by users are informative and representative, and therefore can be used as good sources for summarising videos.

Originality/value

Folksonomy, commonly called collaborative tagging, is a Web 2.0 method for users to freely annotate shared information resources with keywords. It has mostly been used for collaboratively tagging photos (Flickr), web site bookmarks (Del.icio.us), or blog posts (Technorati), but has never been applied to the field of automatic video summarisation. It is believed that this is the first attempt to utilise users' high‐level collaborative tagging activities, instead of low‐level audio‐visual features, for video summarisation.

Details

Online Information Review, vol. 35 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 31 December 2006

Dian Tjondronegoro, Lei Wang and Adrien Joly

Abstract

Affordable mobile devices with video playback functionality are rapidly growing in the market. Current wireless and third‐generation communication networks enable smoother and higher quality streaming video. With the support of these technologies, most participants in the telecom value‐added service chain are planning to shift their business focus to a more profitable and appealing area: mobile TV. Previous work surveying users' behavior while consuming mobile TV has indicated that users normally watch brief and casual content, not full programs. However, most current services adopt a "push" approach, in which users passively receive pre‐defined content rather than pulling the topics and segments that interest them. To promote a more enjoyable and rewarding watching experience, this paper proposes a framework to support fully interactive mobile TV. The main goal is to enable users to: 1) visually locate interesting topics across multiple genres (such as news, sports and entertainment) and 2) fully control the playback flow of the multimedia items while selecting the most interesting segments. A web‐based system has been developed to implement and test the effectiveness of the proposed framework in a wireless and mobile setting.

Details

International Journal of Web Information Systems, vol. 2 no. 3/4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 4 December 2018

Cliff Loke, Schubert Foo and Shaheen Majid

Abstract

Purpose

Keyword search is intuitive, simple to use and convenient. It is also the de facto input interface for textual and multimedia retrieval. However, individuals often perform poorly when faced with the exploratory search tasks that are common during learning, resulting in poor-quality searches. The purpose of this paper is to examine how adolescent learners search for and select videos to support self-learning. The findings allow for the identification of design concepts for video retrieval interfaces and features that can facilitate better exploratory searches.

Design/methodology/approach

Participants were assigned two customized video search tasks. The think-aloud protocol was used to allow participants to verbalize their actions, thoughts and feelings. This approach offered rich insights into the participants' cognitive processes and considerations when performing the search tasks.

Findings

This study identified five themes for exploratory video search behavior: selection of internet resources, query formulation/reformulation, selection of the video(s) for preview, getting acquainted with the video content, and making a decision for the search task. The analysis of these themes led to a number of design concepts, ranging from supporting exploration of topics to better interaction with metadata.

Practical implications

The findings can inform the future development of dedicated video retrieval system interfaces that seek to facilitate effective exploratory searches by learners.

Originality/value

This study contributes by suggesting design concepts for video retrieval system developers to support exploratory video searches.

Details

Aslib Journal of Information Management, vol. 71 no. 4
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 21 December 2021

G. Thirumalaiah and S. Immanuel Alex Pandian

Abstract

Purpose

Space-time variant algorithms do not give good results in practical scenarios; as the number of tubes increases, these techniques fail to produce results. It is also challenging to reduce the energy of the output synopsis videos. In this paper, a new optimized technique is implemented that models and covers every frame in the output video.

Design/methodology/approach

In video synopsis, condensing a video to produce a low frame rate (FR) video using spatial and temporal coefficients is vital in complex environments. Maintaining a database is feasible but consumes space. In recent years, many algorithms have been proposed.

Findings

The main advantage of the proposed technique is that the output frames are selected by user definitions and stored in low-intensity communication systems. It also gives the user full control to select the desired tubes, and thereby the stopping criterion for the output video, to suit the user's knowledge. The result is a non-overlapping, tube-oriented synopsis that provides an excellent visual experience.

Research limitations/implications

In this research paper, four test videos with complex environments (high-density objects) are used, showing that the proposed technique gives better results than other existing techniques.

Originality/value

The proposed method provides a unique technique in video synopsis for compressing the data without loss.

Details

International Journal of Intelligent Unmanned Systems, vol. 11 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 20 April 2012

E. Fersini and F. Sartori

Abstract

Purpose

The need for tools for content analysis, information extraction and retrieval of multimedia objects in their native form is strongly felt in the judicial domain: digital videos represent a fundamental source of information on events occurring during judicial proceedings, and should be stored, organized and retrieved quickly and at low cost. This paper seeks to address these issues.

Design/methodology/approach

In this context the JUMAS system, stemming from the homonymous European Project (www.jumasproject.eu), takes up the challenge of exploiting semantics and machine learning techniques to improve the usability of multimedia judicial folders.

Findings

In this paper one of the most challenging issues addressed by the JUMAS project is described: extracting meaningful abstracts of judicial debates in order to access salient content efficiently. In particular, the authors present an ontology‐enhanced multimedia summarization environment able to derive a synthetic representation of judicial media content with limited loss of meaningful information, while overcoming the information overload problem.

Originality/value

The adoption of ontology‐based query expansion has made it possible to improve the performance of multimedia summarization algorithms with respect to the traditional approaches based on statistics. The effectiveness of the proposed approach has been evaluated on real media contents, highlighting a good potential for extracting key events in the challenging area of judicial proceedings.
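At its core, ontology-based query expansion enriches a user's query with related terms before segments are ranked. The sketch below is a minimal illustration under assumed names; the toy ontology, `expand_query` and `score_segment` are hypothetical stand-ins, not the JUMAS implementation.

```python
# Hypothetical fragment of a judicial domain ontology: term -> related terms.
ONTOLOGY = {
    "witness": ["testimony", "deposition"],
    "verdict": ["judgment", "sentence"],
}

def expand_query(terms):
    """Add ontology-related terms to the original query terms."""
    expanded = set(terms)
    for t in terms:
        expanded.update(ONTOLOGY.get(t, []))
    return expanded

def score_segment(segment_words, query_terms):
    """Rank a transcript segment by its overlap with the expanded query."""
    return len(expand_query(query_terms) & set(segment_words))

seg = ["the", "witness", "gave", "testimony"]
print(score_segment(seg, {"witness"}))  # → 2 (matches "witness" and "testimony")
```

The point of the expansion is that a segment mentioning "testimony" still scores for the query "witness", which purely statistical matching would miss.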

Details

Program, vol. 46 no. 2
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 19 December 2019

Sixing Chen, Jun Kang, Suchi Liu and Yifan Sun

Abstract

Purpose

This paper aims to build on the latest advances in cognitive computing techniques to systematically illustrate how unstructured data from users can offer significant value for co-innovation.

Design/methodology/approach

The paper adopts a general overview approach to understand how unstructured data from users can be analyzed with cognitive computing techniques for innovation. The paper links the computerized techniques with marketing innovation problems with an integrated framework using dynamic capabilities and complexity theory.

Findings

The paper identifies a suite of methodologies for facilitating company co-innovation by engaging with customers and external data through cognitive computing technologies. It helps to expand marketing researchers' and practitioners' understanding of using unstructured data.

Research limitations/implications

This paper provides a conceptual framework that divides the co-innovation process into three stages (idea generation, idea integration and idea evaluation) and maps cognitive computing methodologies and technologies to each stage. The paper makes theoretical contributions by developing propositions from both customer and firm perspectives.

Practical implications

Companies can use this paper to engage consumers and external data in co-innovation activities by strategically selecting appropriate cognitive computing techniques to analyze unstructured data for better insights.

Originality/value

There has been little systematic discussion of what is possible from using cognitive computing to analyze unstructured data for co-innovation. This paper makes a first attempt to summarize how unstructured data can be analyzed with cognitive computing techniques. It also integrates complexity theory into the framework from a novel perspective.

Details

European Journal of Marketing, vol. 54 no. 3
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 9 April 2018

Haroon Idrees, Mubarak Shah and Ray Surette

Abstract

Purpose

The growth of police-operated surveillance cameras has outpaced the ability of humans to monitor them effectively. Computer vision is a possible solution. An ongoing research project on the application of computer vision within a municipal police department is described. The paper aims to discuss these issues.

Design/methodology/approach

Following the demystification of computer vision technology, its potential for police agencies is developed with a focus on computer vision as a solution for two common surveillance camera tasks: live monitoring of multiple surveillance cameras, and summarizing archived video files. Three unaddressed research questions are considered: can specialized computer vision applications for law enforcement be developed at this time; how will computer vision be utilized within existing public safety camera monitoring rooms; and what are the system-wide impacts of a computer vision capability on local criminal justice systems?

Findings

Despite computer vision becoming accessible to law enforcement agencies, its impact has not been discussed or adequately researched. There is little knowledge of computer vision or its potential in the field.

Originality/value

This paper introduces and discusses computer vision from a law enforcement perspective and will be valuable to police personnel tasked with monitoring large camera networks and considering computer vision as a system upgrade.

Details

Policing: An International Journal, vol. 41 no. 2
Type: Research Article
ISSN: 1363-951X

Article
Publication date: 30 August 2013

Vanessa El‐Khoury, Martin Jergler, Getnet Abebe Bayou, David Coquil and Harald Kosch

Abstract

Purpose

Fine‐grained video content indexing, retrieval, and adaptation require accurate metadata describing the video structure and semantics down to the lowest granularity, i.e. the object level. The authors address these requirements by proposing the semantic video content annotation tool (SVCAT) for structural and high‐level semantic video annotation. SVCAT is a semi‐automatic, MPEG‐7 standard‐compliant annotation tool which produces metadata according to a new object‐based video content model introduced in this work. Videos are temporally segmented into shots, and shot‐level concepts are detected automatically using ImageNet as background knowledge. These concepts are used as a guide to easily locate and select objects of interest, which are then tracked automatically to generate object‐level metadata. The integration of shot‐based concept detection with object localization and tracking drastically alleviates the annotator's task. The paper aims to discuss these issues.

Design/methodology/approach

A systematic classification of keyframes into ImageNet categories is used as the basis for automatic concept detection in temporal units. This is then followed by an object tracking algorithm to obtain exact spatial information about the objects.
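The two-step pipeline above (keyframe classification, then tracking) can be sketched abstractly as follows. This is an illustrative skeleton, not SVCAT's code: `annotate_shot` and the stand-in classifier and tracker are assumed names, and a real system would plug in an ImageNet classifier and a visual tracker.

```python
def annotate_shot(keyframes, classify_keyframe, track_object):
    """Shot-level concept detection followed by per-concept tracking.

    classify_keyframe: frame -> iterable of concept labels (e.g. ImageNet)
    track_object: (frames, concept) -> one bounding box per frame
    """
    # Step 1: union of concepts detected across the shot's keyframes.
    concepts = sorted({c for kf in keyframes for c in classify_keyframe(kf)})
    # Step 2: spatial metadata — a track (box per frame) for each concept.
    tracks = {c: track_object(keyframes, c) for c in concepts}
    return concepts, tracks

# Toy stand-ins for the real classifier and tracker.
classify = lambda frame: ["dog"] if "dog" in frame else []
track = lambda frames, concept: [(0, 0, 10, 10) for _ in frames]

concepts, tracks = annotate_shot(["dog_frame", "sky_frame"], classify, track)
print(concepts)  # → ['dog']
```

The design point is the division of labour: coarse concept detection narrows the annotator's attention to likely objects, and tracking then fills in the per-frame spatial detail automatically.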

Findings

Experimental results showed that SVCAT is able to provide accurate object‐level video metadata.

Originality/value

This paper introduces an approach that uses ImageNet to obtain shot‐level annotations automatically. This approach assists video annotators significantly by minimizing the effort required to locate salient objects in the video.

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Content available
Article
Publication date: 1 October 2004

Peter Enser

Details

Journal of Documentation, vol. 60 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 2 September 2013

Dan Albertson

Abstract

Purpose

The purpose of this paper is to present a framework for interactive video retrieval. The framework is intended to be applied both conceptually, for understanding users and use of video digital libraries, and practically, for designing retrieval components such as user interfaces.

Design/methodology/approach

The framework was developed through a user-centered and analytical approach. It serves as an initial attempt to generalize how users interact when searching and browsing digital video across different situations, along with the general designs that can support them.

Findings

The framework is two-fold, yet together comprises one set of conceptual findings. The first component depicts generalized user interactions throughout varying contexts of an interactive video retrieval process; the second illustrates the resulting supportive interface designs or sets of features. Cautions from previous studies not to overgeneralize the interactive process were heeded.

Research limitations/implications

The implications for such research are based on the understanding that video retrieval will benefit from the advancement of user-centered foundations, which can guide and support design decisions for resources like digital libraries.

Originality/value

The need for this study is rather straightforward: there is currently not enough conceptual research on interactive video retrieval from a user-centered perspective. This contrasts with other areas of information retrieval research, where the interaction process has been thoroughly examined across a variety of domains and contexts, with implications for retrieval tools like OPACs, search engines, and article databases.
