Search results
1–10 of 225

Rajeswari S. and Sai Baba Magapu
Abstract
Purpose
The purpose of this paper is to develop a text extraction tool for scanned documents that would extract text and build the keywords corpus and key phrases corpus for the document without manual intervention.
Design/methodology/approach
For text extraction from scanned documents, a Web-based optical character recognition (OCR) tool was developed. Because OCR is a well-established technology, the tool was built on Microsoft Office document imaging tools. To address the commonly encountered problem of skew in scanned documents, a method to detect and correct skew was developed and integrated into the tool. The OCR tool was also customized to build a keywords and key-phrases corpus for every document.
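The skew-correction method itself is not detailed in the abstract; one common approach, sketched below purely as an assumption, scores candidate rotation angles by how sharply the ink pixels concentrate into horizontal rows (a projection-profile search). The function names and search range are illustrative, not from the paper.

```python
import math

def profile_score(points, angle_deg):
    """Score a candidate skew angle: project the ink pixels onto rows
    after de-rotating by the angle; sharper row peaks give higher scores."""
    theta = math.radians(angle_deg)
    rows = {}
    for x, y in points:
        yr = round(-x * math.sin(theta) + y * math.cos(theta))
        rows[yr] = rows.get(yr, 0) + 1
    # sum of squared row counts peaks when text lines align with pixel rows
    return sum(c * c for c in rows.values())

def estimate_skew(points, tenths=range(-50, 51)):
    """Brute-force search from -5.0 to +5.0 degrees in 0.1-degree steps."""
    best = max(tenths, key=lambda t: profile_score(points, t / 10))
    return best / 10
```

The de-rotated angle that maximizes the score is then used to rotate the page back before recognition.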
Findings
The developed tool was evaluated on a 100-document corpus to test the various properties of the OCR. The tool had above 99 per cent word-read accuracy for text-only image documents. The customization of the OCR was tested with samples of microfiches, journal pages from back volumes and newspaper clips, and the results are discussed in the summary. The tool was found to be useful for text extraction and processing.
Social implications
Scanned documents are converted into keywords and key-phrases corpora. The tool could be used to build metadata for scanned documents without manual intervention.
Originality/value
The tool converts unstructured data (in the form of image documents) into structured data (a database of keywords and key phrases). In addition, the image document is converted into an editable and searchable document.
Hongyu Zhao, Zhelong Wang, Qin Gao, Mohammad Mehedi Hassan and Abdulhameed Alelaiwi
Abstract
Purpose
The purpose of this paper is to develop an online smoothing zero-velocity-update (ZUPT) method that helps achieve smooth estimation of human foot motion for the ZUPT-aided inertial pedestrian navigation system.
Design/methodology/approach
The smoothing ZUPT is based on a Rauch–Tung–Striebel (RTS) smoother, using a six-state Kalman filter (KF) as the forward filter. The KF acts as an indirect filter, which allows the sensor measurement error and position error to be excluded from the error state vector, so as to reduce the modeling error and computational cost. A threshold-based strategy is exploited to verify the detected ZUPT periods, with the threshold parameter determined by a clustering algorithm. A quantitative index is proposed to give a smoothness estimate of the position data.
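The six-state filter cannot be reconstructed from the abstract; the sketch below only illustrates the forward-filter/backward-smoother structure of an RTS smoother on a one-state random-constant model (F = H = 1), an assumption made purely for illustration.

```python
def kalman_forward(zs, q, r, x0=0.0, p0=1.0):
    """Forward Kalman filter for a scalar random-constant state.
    Returns filtered means, variances and the predicted variances
    that the backward smoother needs."""
    xs, ps, p_pred = [], [], []
    x, p = x0, p0
    for z in zs:
        pp = p + q                      # predict (identity dynamics)
        k = pp / (pp + r)               # Kalman gain
        x = x + k * (z - x)             # measurement update
        p = (1.0 - k) * pp
        xs.append(x); ps.append(p); p_pred.append(pp)
    return xs, ps, p_pred

def rts_smooth(xs, ps, p_pred):
    """Backward Rauch-Tung-Striebel pass over the forward estimates."""
    xs_s, ps_s = xs[:], ps[:]
    for t in range(len(xs) - 2, -1, -1):
        c = ps[t] / p_pred[t + 1]                       # smoother gain
        xs_s[t] = xs[t] + c * (xs_s[t + 1] - xs[t])     # prediction = xs[t]
        ps_s[t] = ps[t] + c * c * (ps_s[t + 1] - p_pred[t + 1])
    return xs_s, ps_s
```

Because the backward pass folds future measurements into each estimate, early estimates improve and the smoothed variance never exceeds the filtered one, which is what yields the smooth foot-motion trajectories described above.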
Findings
Experimental results show that the proposed method can improve the smoothness, robustness, efficiency and accuracy of pedestrian navigation.
Research limitations/implications
Because of the chosen smoothing algorithm, a delay no longer than one gait cycle is introduced. Therefore, the proposed method is suitable for applications with soft real-time constraints.
Practical implications
The paper has implications for the smooth estimation of most types of pedal locomotion achieved by legged motion, using a single foot-mounted commercial-grade inertial sensor.
Originality/value
This paper helps realize smooth transitions between swing and stance phases, enables continuous correction of navigation errors during the whole gait cycle, achieves robust detection of gait phases and, more importantly, requires lower computational cost.
Abstract
Purpose
The paper aims to discuss error detection and correction in Kashmiri carpet weaving (KCW), mediated by a cryptographic code, Talim, which is held to guarantee accurate transfer of information from designing to weaving, even after a hundred years. Yet carpets often show errors on completion.
Design/methodology/approach
Human-factors analysis revealed error emergence, detection and correction in this practice, whose task domains are distributed over large geographies (from on-premises to several kilometres apart) and timescales (from days to decades). Using a prospective observation method, the production of two research carpets was observed through design, coding and weaving, noting the errors made, identified and corrected by actors in each phase.
Findings
Errors were found to emerge, be identified and be corrected during the different phases of designing, coding and weaving, while giving rise to fresh errors in each phase due to actors' normal work routines.
Originality/value
In view of this, the usual branding of “weaver error” as the cause of a flawed carpet turns out to be a misplaced value judgment passed in hindsight.
Lokesh Singh, Rekh Ram Janghel and Satya Prakash Sahu
Abstract
Purpose
Automated skin lesion analysis plays a vital role in early detection. The relatively small size and class imbalance of skin lesion datasets impede learning and dominate research in automated skin lesion analysis. The unavailability of adequate data makes it difficult to develop classification methods because of the skewed class distribution.
Design/methodology/approach
Boosting-based transfer learning (TL) paradigms such as the Transfer AdaBoost algorithm can compensate for such a lack of samples by taking advantage of auxiliary data. In such methods, however, beneficial source instances that represent the target undergo fast, stochastic weight convergence, producing a “weight drift” that negates transfer. In this paper, a framework is designed around “Rare-Transfer” (RT), a boosting-based TL algorithm that prevents weight drift and simultaneously addresses absolute rarity in skin lesion datasets. RT keeps the weights of source samples from converging too quickly, and it addresses absolute rarity through an instance-transfer approach that incorporates the best-fit set of auxiliary examples, improving balanced error minimization. It thus compensates for class imbalance and the scarcity of training samples at the same time, inducing balanced error optimization.
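The exact Rare-Transfer update is not given in the abstract; the sketch below shows the TrAdaBoost-style instance reweighting that such methods build on, with a drift-correction rescaling of the source weights (the 2(1 − ε) factor follows the dynamic-TrAdaBoost line of work; RT's own update may differ). All names and values here are illustrative.

```python
import math

def reweight(w_src, w_tgt, miss_src, miss_tgt, eps, n_rounds):
    """One boosting round of TrAdaBoost-style instance transfer.

    w_src, w_tgt       -- current weights of source / target instances
    miss_src, miss_tgt -- 1 where the weak learner misclassified, else 0
    eps                -- weighted error of the weak learner on target data
    """
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(len(w_src)) / n_rounds))
    beta_tgt = eps / (1.0 - eps)
    # misclassified source instances are down-weighted (they look unhelpful)...
    new_src = [w * (beta_src if m else 1.0) for w, m in zip(w_src, miss_src)]
    # ...then rescaled so the source weights do not drift to zero over rounds
    new_src = [w * 2.0 * (1.0 - eps) for w in new_src]
    # misclassified target instances are up-weighted, as in AdaBoost
    new_tgt = [w * (1.0 / beta_tgt if m else 1.0) for w, m in zip(w_tgt, miss_tgt)]
    total = sum(new_src) + sum(new_tgt)
    return [w / total for w in new_src], [w / total for w in new_tgt]
```

Without the rescaling step, the source weights shrink geometrically each round, which is the "weight drift" the abstract describes.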
Findings
Promising results are obtained with RT compared with state-of-the-art techniques on absolute-rare skin lesion datasets, with an accuracy of 92.5%. A Wilcoxon signed-rank test is used to examine significant differences between the proposed RT algorithm and the conventional algorithms used in the experiment.
Originality/value
Experiments are performed on four absolute-rare skin lesion datasets, and the effectiveness of RT is assessed in terms of accuracy, sensitivity, specificity and area under the curve. Its performance is compared with existing ensemble and boosting-based TL methods.
Alexander M. Robertson and Peter Willett
Abstract
This paper provides an introduction to the use of n‐grams in textual information systems, where an n‐gram is a string of n, usually adjacent, characters extracted from a section of continuous text. Applications that can be implemented efficiently and effectively using sets of n‐grams include spelling error detection and correction, query expansion, information retrieval with serial, inverted and signature files, dictionary look‐up, text compression, and language identification.
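As an illustration of the spelling-correction application, the sketch below compares words by the overlap of their character bigrams using the Dice coefficient (the padding character and the choice of n = 2 are arbitrary, not from the paper).

```python
def ngrams(word, n=2):
    """Set of character n-grams, padded so word boundaries count."""
    padded = f"*{word}*"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice(a, b, n=2):
    """Dice coefficient between two words' n-gram sets (0..1)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def best_match(misspelling, dictionary, n=2):
    """Dictionary word whose n-gram profile is closest to the input."""
    return max(dictionary, key=lambda w: dice(misspelling, w, n))
```

Because n-gram sets can be indexed, the same similarity measure also supports the retrieval and dictionary look-up applications listed above.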
Xiaochun Tian, Jiabin Chen, Yongqiang Han, Jianyu Shang and Nan Li
Abstract
Purpose
This study aims to design an optimized algorithm for a low-cost pedestrian navigation system (PNS) that corrects heading drift and altitude error, thus achieving highly precise pedestrian localization in both two-dimensional (2-D) and three-dimensional (3-D) space.
Design/methodology/approach
A novel heading correction algorithm based on a smoothing filter at the terminal of the zero velocity interval (ZVI) is proposed. The algorithm uses the magnetic sensor to calculate all the heading angles in the ZVI and then applies a smoothing filter to obtain the optimal heading angle. Heading correction is then executed at the terminal moment of the ZVI. Meanwhile, an altitude correction algorithm based on a step-height constraint is proposed to suppress the altitude-channel divergence of the strapdown inertial navigation system, using the step height as the measurement of a Kalman filter.
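The abstract does not specify the smoothing filter applied to the headings; a minimal sketch, assuming a simple circular mean over the magnetometer headings sampled in the ZVI, is:

```python
import math

def mean_heading(headings_deg):
    """Circular mean of heading angles sampled over a zero-velocity
    interval; averaging sines and cosines avoids the 359/1 degree
    wrap-around problem that a plain arithmetic mean would have."""
    s = sum(math.sin(math.radians(h)) for h in headings_deg)
    c = sum(math.cos(math.radians(h)) for h in headings_deg)
    return math.degrees(math.atan2(s, c)) % 360.0
```

The smoothed value would then replace the drifted heading at the terminal moment of the interval, as described above.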
Findings
Verification experiments were carried out in 2-D and 3-D space to evaluate the performance of the proposed pedestrian navigation algorithm. The results show that heading drift and altitude error were well corrected. The path calculated by the novel algorithm also matches the reference trajectory more closely, and the positioning errors of the 2-D and 3-D trajectories are both less than 0.5 per cent.
Originality/value
Besides zero-velocity update, two further problems, namely heading drift and altitude error in the PNS, are solved, which ensures high positioning precision for pedestrians in indoor and outdoor environments.
Martin Langner and David Sanders
Abstract
Simple and affordable systems are described to assist wheelchair users in steering their wheelchairs across sloping ground. The systems can be attached to many standard powered wheelchairs. Wheelchairs often steer by means of two swivelling caster wheels, but problems occur when a wheelchair is driven along sloping ground, because the casters can swivel in the direction of the slope. Gravity then causes the wheelchair to start an unwanted turn, or ‘veer’, and the chair goes in an unintended direction. This situation is exacerbated for switch users, as switches cannot provide the fine control needed to trim and compensate. Early experiments demonstrated that calibrating wheelchair controllers for straight-line balance and optimising motor compensation did not solve this problem. Caster angle was therefore selected to provide feedback to the wheelchair controllers. By the time veer is first detected, a wheelchair has already begun to alter course, and the job of the correction system is to minimise this drift from the desired course. A rolling road was created as an assessment tool, and trials both with the test bed and in real situations were conducted to evaluate the new systems. The small swivel detector that was created could be successfully attached to caster swivel bearings. The new system was successful, robust and unaffected by changeable parameters. Although primarily intended for switch users, the methods can be applied to users with proportional controls.
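The published control details are not in this summary; as a purely hypothetical sketch of the feedback idea, the detected caster swivel angle could drive a bounded differential trim on the two wheel commands:

```python
def veer_trim(caster_angle_deg, gain=0.05, limit=0.2):
    """Map detected caster swivel (degrees, signed) to a bounded
    differential trim applied to the left/right wheel commands.
    gain and limit are illustrative tuning values, not from the paper."""
    trim = max(-limit, min(limit, gain * caster_angle_deg))
    return -trim, trim   # slow one wheel, speed the other, against the veer
```

Bounding the trim keeps the correction gentle enough that it only cancels the slope-induced drift rather than overriding the user's steering intent.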
Abstract
“It should also be noted that the objective of convergence and equal distribution, including across under-performing areas, can hinder efforts to generate growth. Contrariwise, the objective of competitiveness can exacerbate regional and social inequalities, by targeting efforts on zones of excellence where projects achieve greater returns (dynamic major cities, higher levels of general education, the most advanced projects, infrastructures with the heaviest traffic, and so on). If cohesion policy and the Lisbon Strategy come into conflict, it must be borne in mind that the former, for the moment, is founded on a rather more solid legal foundation than the latter” (European Commission, 2005, p. 9, Adaptation of Cohesion Policy to the Enlarged Europe and the Lisbon and Gothenburg Objectives).
Domenico Campa, Alberto Quagli and Paola Ramassa
Abstract
Purpose
This study reviews and discusses the accounting literature that analyzes the role of auditors and enforcers in the context of fraud.
Design/methodology/approach
This literature review includes both qualitative and quantitative studies, based on the idea that the findings from different research paradigms can shed light on the complex interactions between different financial reporting controls. The authors use a mixed-methods research synthesis and select 64 accounting journal articles to analyze the main proxies for fraud, the stages of the fraud process under investigation and the roles played by auditors and enforcers.
Findings
The study highlights heterogeneity with respect to the terms and concepts used to capture the fraud phenomenon, a fragmentation in terms of the measures used in quantitative studies and a low level of detail in the fraud analysis. The review also shows a limited number of case studies and a lack of focus on the interaction and interplay between enforcers and auditors.
Research limitations/implications
This study outlines directions for future accounting research on fraud.
Practical implications
The analysis underscores the need for the academic community, policymakers and practitioners to work together to prevent the destructive economic and social consequences of fraud in an increasingly complex and interconnected environment.
Originality/value
This study differs from previous literature reviews, which focus on a single monitoring mechanism or deal with fraud in a broad manner, by discussing how the accounting literature addresses the roles of, and the complex interplay between, enforcers and auditors in the context of accounting fraud.
Heather J. Rogers and Peter Willett
Abstract
An increasing volume of historical text is being converted into machine‐readable form so as to allow database searches to be carried out. The age of the material in these databases means that they contain many spellings that are different from those used today. This characteristic means that, once the databases become available for general online access, users will need to be familiar with all of the possible historical spellings for their topic of interest if a search is to be carried out successfully. This paper investigates the use of computational techniques that have been developed for the correction of spelling errors to identify historical spellings of a user's search terms. Two classes of spelling correction method are tested, these being the reverse error and phonetic coding methods. Experiments with words from the Hartlib Papers Collection show that these methods can correctly identify a large number of historical forms of modern‐day word spellings.
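As an illustration of the phonetic-coding family of methods (the paper's own coding schemes are not reproduced here), the classic Soundex code groups letters by sound so that variant spellings of the same word collide on one code:

```python
def soundex(word):
    """Classic Soundex: first letter plus three digits, with adjacent
    letters of the same sound group coded once (h and w do not
    separate such groups, but vowels do)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":          # h/w do not reset the previous group
            prev = code
    return (out + "000")[:4]
```

A historical variant and its modern spelling that share a Soundex code become candidate matches, which is the retrieval step the experiments above evaluate.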