Search results
1 – 10 of over 2,000
Abstract
Presents some basic features for encoding spoken texts with the TEI (text encoding initiative) scheme. Highlights the reasons for encoding spoken texts and gives a brief history of text encoding development. An example is used to demonstrate how to encode a simple transcription using the TEI scheme. Pros and cons of text encoding are also discussed. Creating TEI‐conformant transcriptions with XML provides the possibility for researchers to retrieve original recordings via hyper‐textual pages to look for specific (or partial) features that were not included in the transcription.
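The kind of TEI markup involved can be sketched with a minimal utterance-level transcription, built here with Python's standard ElementTree. The speaker IDs and wording are invented for illustration; a real TEI document would carry a full header and the TEI namespace.

```python
import xml.etree.ElementTree as ET

# Build a minimal TEI-style transcription of a short spoken exchange.
# Speaker IDs and utterance text are invented for illustration.
turns = [("#spk1", "Did you see the match last night?"),
         ("#spk2", "Only the second half, unfortunately.")]

body = ET.Element("body")
for who, words in turns:
    # in TEI, <u> marks one utterance; @who points at a speaker description
    u = ET.SubElement(body, "u", attrib={"who": who})
    u.text = words

xml_out = ET.tostring(body, encoding="unicode")
print(xml_out)
```

Because the markup is plain XML, such transcriptions can be queried or transformed with any standard XML tooling, which is the retrieval benefit the abstract points to.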
Canran Zhang, Jianping Dou, Shuai Wang and Pingyuan Wang
Abstract
Purpose
The cost-oriented robotic assembly line balancing problem (cRALBP) has practical importance in real-life manufacturing scenarios. However, only a few studies tackle the cRALBP using exact methods or metaheuristics. This paper aims to propose a hybrid particle swarm optimization (PSO) combined with dynamic programming (DPPSO) to solve cRALBP type-I.
Design/methodology/approach
Two different encoding schemes are presented for comparison. In the frequently used Scheme 1, a full encoding of task permutations and robot allocations is adopted, which generates a relatively large search space; DPSO1 and DPSO2 are developed with this full encoding scheme. To reduce the search space and focus on promising solution regions, Scheme 2 encodes only task permutations, and DP is used to obtain the optimal robot sequence for a given task permutation in polynomial time; DPPSO is proposed on this basis.
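The dynamic-programming idea behind Scheme 2 can be sketched as follows (a toy reconstruction, not the authors' implementation): with the task order fixed, choose station boundaries and a robot for each station so that total robot cost is minimal under a cycle-time limit. All task times, robot parameters and the cost model below are invented.

```python
# Sketch of DP over a fixed task permutation: dp[i] is the minimum robot
# cost needed to assign the first i tasks to stations.
def min_cost_allocation(times, robot_time_factor, robot_cost, cycle_time):
    """times[t]: processing time of task t in permutation order;
    robot_time_factor[r]: speed multiplier of robot r;
    robot_cost[r]: cost of placing robot r at a station."""
    n = len(times)
    dp = [float("inf")] * (n + 1)
    dp[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):                     # tasks j..i-1 form one station
            block = sum(times[j:i])
            # cheapest robot that can finish this station within the cycle time
            best = min((robot_cost[r] for r in range(len(robot_cost))
                        if block * robot_time_factor[r] <= cycle_time),
                       default=None)
            if best is not None and dp[j] + best < dp[i]:
                dp[i] = dp[j] + best
    return dp[n]

# toy instance: 4 tasks, 2 robots (fast robot costs more)
cost = min_cost_allocation([3, 2, 4, 1], [1.0, 0.5], [10.0, 25.0], 5.0)
print(cost)   # 20.0
```

The double loop over prefixes gives the polynomial running time the abstract mentions; the PSO layer then only has to search over task permutations.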
Findings
A set of instances is generated, and the numerical experiments indicate that DPPSO achieves a tradeoff between solution quality and computation time and outperforms existing algorithms in solution quality.
Originality/value
The contributions of this paper are three aspects. First, two different schemes of encoding are presented, and three PSO algorithms are developed for the purpose of comparison. Second, a novel updating mechanism of discrete PSO is adjusted to generate feasible task permutations for cRALBP. Finally, a set of instances is generated based on two cost parameters, then the performances of algorithms are systematically compared.
Kwong‐Sak Leung, Jian‐Yong Sun and Zong‐Ben Xu
Abstract
In this paper, a set of safe adaptive genetic algorithms (sGAs) is proposed based on the Splicing/Decomposable encoding scheme and the efficient speed‐up strategies developed by Xu et al. The proposed algorithms implement self‐adaptation of the problem representation, selection and recombination operators at the population, individual and component levels, which balances the conflicts between "reliability" and "efficiency", and between "exploitation" and "exploration", that exist in evolutionary algorithms. It is shown that the algorithms converge to the optimum solution with probability one. The proposed sGAs are experimentally compared with the classical genetic algorithm (CGA), the non‐uniform genetic algorithm (nGA) proposed by Michalewicz, the forking genetic algorithm (FGA) proposed by Tsutsui et al. and classical evolutionary programming (CEP). The experiments indicate that the new algorithms perform much more efficiently than CGA and FGA, and comparably with the real‐coded approaches nGA and CEP. All the algorithms are further evaluated on a difficult real‐life application: the inverse problem of fractal encoding related to the fractal image compression technique. The results for the sGAs are better than those of CGA and FGA, and equal to or sometimes better than those of nGA and CEP.
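The individual-level self-adaptation idea can be illustrated with a deliberately simplified algorithm (not the sGA of the paper): each individual carries its own mutation step size, which evolves along with the solution it encodes, so the search widens or narrows without any externally tuned schedule. The objective, population size and step rules below are invented toy choices.

```python
import random

# Toy self-adaptive evolutionary loop: maximize -x^2 (optimum at x = 0).
# Each individual is a pair (gene, step); the step itself mutates, so
# individuals near the optimum tend to inherit smaller steps.
random.seed(1)

def fitness(x):
    return -x * x

pop = [(random.uniform(-10, 10), 1.0) for _ in range(20)]
for gen in range(200):
    offspring = []
    for x, step in pop:
        new_step = step * random.choice([0.9, 1.1])   # step self-adapts
        offspring.append((x + random.gauss(0, new_step), new_step))
    # truncation selection over parents + offspring (elitist)
    pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]),
                 reverse=True)[:20]

best = pop[0][0]
print(round(best, 3))
```

The sGAs add much more machinery (population- and component-level adaptation, the Splicing/Decomposable representation), but the mechanism of parameters riding along with individuals is the same.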
Hadi Grailu, Mojtaba Lotfizad and Hadi Sadoghi‐Yazdi
Abstract
Purpose
The purpose of this paper is to propose a lossy/lossless binary textual image compression method based on an improved pattern matching (PM) technique.
Design/methodology/approach
In the Farsi/Arabic script, contrary to the printed Latin script, letters usually attach together and produce various patterns. Hence, some patterns are fully or partially subsets of others. Two new ideas are proposed here. First, the number of library prototypes is reduced by detecting and then removing fully or partially similar prototypes. Second, a new effective pattern encoding scheme is proposed for all types of patterns, including text and graphics. The new encoding scheme has two operation modes, chain coding and soft PM, selected according to the ratio of the pattern area to its effective chain code length. To encode the number sequences, the authors have modified the multi‐symbol QM‐coder. The proposed method has three levels of lossy compression, each of which further increases the compression ratio. The first level applies processing in the chain code domain, such as omission of small patterns and holes, omission of inner holes of characters, and smoothing of pattern boundaries. The second level uses the selective pixel reversal technique, and the third level prioritizes the residual patterns for encoding with respect to their degree of compactness.
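The chain-coding mode can be illustrated with the standard 8-directional Freeman chain code (a generic sketch, not the authors' exact encoder): a pattern boundary is stored as a start point plus a sequence of direction digits, which is far more compact than a list of coordinates.

```python
# 8-directional Freeman chain code: each step between successive boundary
# points is stored as one digit 0-7 instead of an (x, y) pair.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_encode(points):
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return points[0], code

def chain_decode(start, code):
    pts = [start]
    for d in code:
        dx, dy = DIRS[d]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

# closed boundary of a unit square, invented for illustration
boundary = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
start, code = chain_encode(boundary)
print(code)                                   # [0, 2, 4, 6]
assert chain_decode(start, code) == boundary  # lossless round trip
```

Operations such as smoothing or hole removal can then be performed directly on the digit sequence, which is why the method's first lossy level works "in the chain code domain".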
Findings
Experimental results show that the compression performance of the proposed method is considerably better than that of the best existing binary textual image compression methods, by a factor of 1.6‐3 in the lossy case and 1.3‐2.4 in the lossless case at 300 dpi. The maximum compression ratios are achieved for Farsi and Arabic textual images.
Research limitations/implications
Only binary printed (typeset) textual images are considered.
Practical implications
The proposed method has a high‐compression ratio for archiving and storage applications.
Originality/value
To the authors' best knowledge, existing textual image compression methods and standards have not so far exploited the full or partial similarity of prototypes to increase the compression ratio for any script. Also, the idea of combining boundary description methods with run‐length and arithmetic coding techniques has not so far been used.
Abstract
Access to educational material has become an important issue for many stakeholders and the focus of much research worldwide. Resource discovery in educational gateways is usually based on metadata and this is an area of important developments. Resource metadata has a central role in the management of educational material and as a result there are several important metadata standards in use in the educational domain. One of the most widely used general metadata standards for learning material is the Dublin Core Metadata Element Set. The application of this general-purpose metadata standard to complex and heterogeneous educational material is not straightforward. This paper gives an overview of some practical issues and necessary steps in deploying Dublin Core based on the LITC experience in the EASEL (Educators Access to Services in the Electronic Landscape) project.
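A minimal Dublin Core record shows the flavour of the element set. The resource described below is hypothetical; the namespace is the standard fifteen-element set at `http://purl.org/dc/elements/1.1/`.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# A minimal Dublin Core description of a (hypothetical) learning resource.
record = ET.Element("record")
for element, value in [("title", "Introduction to Metadata"),
                       ("creator", "Example Author"),
                       ("type", "InteractiveResource"),
                       ("language", "en")]:
    e = ET.SubElement(record, f"{{{DC}}}{element}")
    e.text = value

xml_out = ET.tostring(record, encoding="unicode")
print(xml_out)
```

The difficulty the abstract alludes to is that every element here is optional and repeatable, so profiles and local conventions are needed before such records are usable across heterogeneous educational collections.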
Abstract
This article discusses the use of Encoded Archival Description (EAD) as a metadata framework within applications that are primarily concerned with the provision of access to digital forms of archive documents. These digital forms are transcripts encoded using the Text‐Encoding Initiative (TEI) and images. The article argues that EAD, as it currently stands, is focused on the provision of metadata for original archive documents rather than for digital forms of originals, and it explores where metadata about originals and their digital forms converge and diverge. It suggests how the EAD framework can be expanded to allow for the capture of adequate metadata about both types of document and asserts that such expansion enables EAD to act as a more complete and comprehensive metadata framework in online environments. This approach to digitisation relies on the flexibility of XML technology. The article is based on the research undertaken within the LEADERS project (http://www.ucl.ac.uk/leaders‐project/).
Abstract
Libraries must actively support humanities text files, but we must remember that to focus exclusively on texts tied to specific systems is to put ourselves in opposition to the needs of the researchers we intend to serve. A working model of the sort of system and resource provision that is appropriate is described. The system, one put in place at the University of Michigan, is the result of several years of discussions and investigation. While by no means the only model upon which to base such a service, it incorporates several features that are essential to the support of these materials: standardized, generalized data; the reliance on standards for the delivery of information; and remote use. Sidebars discuss ARTFL, a textual database; the Oxford Text Archive; InteLex; the Open Text Corporation; the Text Encoding Initiative (TEI); the machine‐readable version of the Oxford English Dictionary, 2d edition; and the Center for Electronic Texts in the Humanities.
Yumeng Hou, Fadel Mamar Seydou and Sarah Kenderdine
Abstract
Purpose
Despite being an authentic carrier of various cultural practices, the human body is often underutilised as a means of accessing the knowledge it carries. Digital inventions today have created new avenues to open up cultural data resources, yet mainly as apparatuses for well-annotated and object-based collections. Hence, there is a pressing need to empower the representation of intangible expressions, particularly embodied knowledge within its cultural context. To address this issue, the authors propose to inspect the potential of machine learning methods to enhance archival knowledge interaction with intangible cultural heritage (ICH) materials.
Design/methodology/approach
This research adopts a novel approach by combining movement computing with knowledge-specific modelling to support retrieving through embodied cues, which is applied to a multimodal archive documenting the cultural heritage (CH) of Southern Chinese martial arts.
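Retrieval through embodied cues ultimately reduces to nearest-neighbour search in a shared embedding space. A toy sketch of that final step (with invented three-dimensional embeddings, not HKMALA data) looks like this:

```python
import math

# Nearest-neighbour retrieval in a shared embedding space: a movement-query
# vector is matched against archive items by cosine similarity.
# All vectors below are invented toy embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

archive = {
    "clip_punch":  [0.9, 0.1, 0.0],
    "clip_kick":   [0.1, 0.9, 0.2],
    "clip_stance": [0.2, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]   # embedding of an incoming movement query

ranked = sorted(archive, key=lambda k: cosine(query, archive[k]),
                reverse=True)
print(ranked[0])   # clip_punch
```

The paper's contribution lies in how the embeddings are produced (movement computing plus knowledge-specific modelling); once items and queries share a space, ranking is this simple.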
Findings
Through experimenting with a retrieval engine implemented using the Hong Kong Martial Arts Living Archive (HKMALA) datasets, this work validated the effectiveness of the developed approach in multimodal content retrieval and highlighted the potential of the multimodal approach for facilitating archival exploration and knowledge discoverability.
Originality/value
This work takes a knowledge-specific approach to devising an intelligent encoding method through a deep-learning workflow. This article underlines that the convergence of algorithmic reckoning and content-centred design holds promise for transforming the paradigm of archival interaction, thereby augmenting knowledge transmission via more accessible CH materials.
Chunqiu Li and Shigeo Sugimoto
Abstract
Purpose
Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
Design/methodology/approach
The DSP-PROV model is developed through applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. Metadata Application Profile of Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparison between formal provenance description in DSP-PROV and semi-formal change log description in English.
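The PROV idea underlying the model, an activity that uses one schema version and generates the next, can be sketched with a simple record. All identifiers, dates and the change note below are invented; only the `prov:`-prefixed terms (`used`, `generated`, `wasAssociatedWith`, `endedAtTime`) come from the W3C PROV vocabulary.

```python
from datetime import date

# A PROV-flavoured record of one structural change to a metadata schema:
# an Activity uses the old schema version (an Entity) and generates the new one.
change = {
    "activity": "revise-schema-2020-01",
    "prov:used": "dsp:profile-v1",        # schema version before the change
    "prov:generated": "dsp:profile-v2",   # schema version after the change
    "prov:wasAssociatedWith": "agent:schema-editor",
    "prov:endedAtTime": date(2020, 1, 15).isoformat(),
    "note": "property 'rights' made mandatory",
}

def describe(c):
    return (f"{c['activity']}: {c['prov:used']} -> {c['prov:generated']} "
            f"on {c['prov:endedAtTime']}")

print(describe(change))
```

Because each change is an explicit, typed record rather than a free-text log entry, a chain of such records can be traversed or validated mechanically, which is the advantage over the semi-formal English change log the paper evaluates against.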
Findings
Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
Research limitations/implications
The DSP-PROV model is applicable to keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
Originality/value
This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model to conventional semi-formal provenance description.
Mohamed Amine Kaaouache and Sadok Bouamama
Abstract
Purpose
The purpose of this paper is to propose a novel hybrid genetic algorithm based on a virtual machine (VM) placement method to improve energy efficiency in cloud data centers. How to place VMs on physical machines (PMs) to improve resource utilization and reduce energy consumption is one of the major concerns for cloud providers. Over the past few years, many approaches for VM placement (VMP) have been proposed; however, existing VM placement approaches only consider energy consumption by PMs, and do not consider the energy consumption of the communication network of a data center.
Design/methodology/approach
This paper attempts to solve the energy consumption problem using a VM placement method in cloud data centers. Our approach uses a repairing procedure based on a best-fit decreasing heuristic to resolve violations caused by infeasible solutions that exceed the capacity of the resources during the evolution process.
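A best-fit decreasing repair of this kind can be sketched as follows (a hedged reconstruction, not the authors' code): VMs are evicted from overloaded physical machines and re-placed, largest first, on the feasible PM with the least remaining capacity. Demands and capacities are toy single-resource values; the sketch assumes total capacity suffices.

```python
# Repair an infeasible VM placement with a best-fit decreasing heuristic.
def repair(assignment, vm_demand, pm_capacity):
    """assignment[vm] = index of the PM hosting vm; repaired in place."""
    used = [0.0] * len(pm_capacity)
    for vm, pm in enumerate(assignment):
        used[pm] += vm_demand[vm]
    # evict VMs (smallest first) from overloaded PMs until each PM fits
    evicted = []
    for vm in sorted(range(len(assignment)), key=lambda v: vm_demand[v]):
        pm = assignment[vm]
        if used[pm] > pm_capacity[pm]:
            used[pm] -= vm_demand[vm]
            assignment[vm] = None
            evicted.append(vm)
    # re-place evicted VMs, largest first, on the tightest feasible PM
    for vm in sorted(evicted, key=lambda v: -vm_demand[v]):
        fits = [p for p in range(len(pm_capacity))
                if used[p] + vm_demand[vm] <= pm_capacity[p]]
        best = min(fits, key=lambda p: pm_capacity[p] - used[p])  # best fit
        used[best] += vm_demand[vm]
        assignment[vm] = best
    return assignment

# toy instance: PM 0 (capacity 5) is overloaded by VMs 0 and 1
placement = repair([0, 0, 1, 1], [4, 3, 2, 1], [5, 6])
print(placement)   # [0, 1, 1, 1]: vm 1 moved off the overloaded PM 0
```

Embedded in a genetic algorithm, such a procedure lets crossover and mutation produce capacity-violating offspring freely, with feasibility restored before fitness evaluation.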
Findings
By reducing energy consumption with the proposed technique, the number of VM migrations was reduced compared with existing techniques. Moreover, fewer service level agreement violations (SLAV) were caused in the communication network.
Originality/value
The proposed algorithm aims to minimize energy consumption in both PMs and communication networks of data centers. Our hybrid genetic algorithm is scalable because the computation time increases nearly linearly when the number of VMs increases.