Search results
1–10 of over 2,000 results

Glenn Porter and Robert Ebeyan
Abstract
Purpose
The ability to distinguish between “original” and “copied” images has been a persistent forensic imaging difficulty and can be of considerable importance to certain criminal and civil investigations. The purpose of this paper is to introduce a novel assessment-criteria method that combines visual and metadata-based information to determine whether images are original or second-generation duplicates (copies made by rephotographing the original hardcopy).
Design/methodology/approach
The study reflects difficulties raised by forensic cases and is modelled on a fraud investigation involving images sourced from camera phones. The method applied a new assessment-criteria approach, and the results were evaluated by applying the criteria to a sample set of second-generation images.
Findings
The evaluation confirmed the validity of several theorised detection artefacts, resulting in the articulation and presentation of 17 detection criteria considered useful for supporting image analysis.
Originality/value
The result of this study is an expansion of the tools available to examiners for addressing complex image authentication problems. The criteria approach also assists with transparently communicating the details of the photo interpretation processes for review and scrutiny.
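The metadata side of such an assessment can be sketched in code. The checks, field names, and thresholds below are illustrative assumptions for the general idea of metadata-based authentication, not the study's actual 17 criteria:

```python
# Hypothetical metadata-based checks in the spirit of a criteria approach.
# The criterion names and thresholds are illustrative assumptions, not the
# 17 detection criteria articulated in the paper.

def assess_metadata(exif: dict) -> list[str]:
    """Return flags suggesting an image may be a second-generation copy."""
    flags = []
    # A rephotographed hardcopy is captured by a *second* camera, so the
    # recorded capture device may not match the claimed source device.
    if exif.get("Make") != exif.get("ClaimedMake"):
        flags.append("capture device differs from claimed source device")
    # Close-up rephotography of a print often forces short subject distances.
    distance = exif.get("SubjectDistance")
    if distance is not None and distance < 0.5:  # metres; illustrative threshold
        flags.append("subject distance consistent with rephotographing a print")
    # Software fields naming an image editor can indicate post-processing.
    if "Photoshop" in exif.get("Software", ""):
        flags.append("editing software recorded in metadata")
    return flags

report = assess_metadata({
    "Make": "Nokia",
    "ClaimedMake": "Apple",
    "SubjectDistance": 0.3,
    "Software": "Nokia Camera",
})
print(report)
```

In a real workflow the dictionary would be populated from the image file's EXIF block, and each flag would be weighed alongside the visual criteria rather than treated as conclusive on its own.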
Abstract
Purpose
To discuss and review the shift to computer enhanced self‐monitoring CCTV surveillance systems of public spaces and the social implications of this shift.
Design/methodology/approach
A review of the research and evaluation literature on CCTV surveillance systems, drawing out the history of public-space CCTV systems and the concerns associated with first- and second-generation CCTV surveillance.
Findings
The main difference between first and second generation surveillance is the change from a “dumb camera” that needs a human eye to evaluate its images to a computer‐linked camera system that evaluates its own video images. Second generation systems reduce the human factor in surveillance and address some of the basic concerns associated with first generation surveillance systems such as data swamping, boredom, voyeurism, and profiling. Their enhanced capabilities, though, raise new concerns, particularly the expansion of surveillance and its intrusiveness.
Research limitations/implications
Additional research is needed to assess CCTV surveillance on a set of social dynamics such as informal guardianship activities by citizens.
Practical implications
The adoption of computer‐enhanced CCTV surveillance systems should not be an automatic response to a public space security problem and their deployment should not be decided simply on the technology's availability or cost.
Originality/value
This paper provides a concise overview of the concerns associated with first generation CCTV surveillance and how the evolution of computer‐enhanced CCTV surveillance systems will alter and add to these concerns. For researchers it details research questions that need to be addressed. For practitioners and government officials considering the use of public space CCTV surveillance it provides a set of issues that should be considered prior to system adoption or deployment.
B. Pradhan, K. Sandeep, Shattri Mansor, Abdul Rahman Ramli and Abdul Rashid B. Mohamed Sharif
Abstract
Purpose
In GIS applications, a realistic representation of a terrain requires a great number of triangles, which ultimately increases the data size. For online interactive GIS programs it has become essential to reduce the number of triangles in order to save storage space. There is therefore a need to visualise terrains at different levels of detail; for example, a region of high interest should be rendered at higher resolution than a region of low or no interest. Wavelet technology provides an efficient approach: using it, one can decompose terrain data into a hierarchy. On the other hand, the number of triangles at subsequent levels should not become too small, otherwise the representation of the terrain becomes poor.
Design/methodology/approach
This paper proposes a new computational code (see the Appendix for the flow chart and pseudo code) for triangulated irregular networks (TINs) using Delaunay triangulation methods. The algorithms have proved to be efficient tools in numerical methods such as the finite element method and image processing. Further, second-generation wavelet techniques, popularly known as “lifting schemes”, have been applied to compress the TIN data.
Findings
A new interpolation wavelet filter for TIN has been applied in two steps, namely splitting and elevation. In the splitting step, a triangle has been divided into several sub‐triangles and the elevation step has been used to “modify” the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second generation wavelets.
Originality/value
A new algorithm for second generation wavelet compression has been proposed for TIN data compression. The quality of the geographical surface representation after applying the proposed technique is compared with the original terrain. The results show that this method achieves a significant reduction of the data set.
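The split/predict/update structure of a lifting scheme can be illustrated with a minimal sketch. The paper lifts over an irregular triangulation; the version below works on a regular 1-D elevation profile instead (an assumption for brevity), using linear prediction:

```python
# A minimal 1-D lifting-scheme sketch. Real TIN compression lifts over an
# irregular triangulation; this illustrative version uses a regular 1-D
# signal and a linear predictor.

def lifting_forward(signal):
    """One level of a linear-prediction lifting transform."""
    even = signal[0::2]  # split: samples retained at the coarse level
    odd = signal[1::2]   # split: samples to be predicted away
    # Predict each odd sample from its even neighbours; keep only the error.
    detail = [o - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
              for i, o in enumerate(odd)]
    # Update the evens so coarse-level averages are preserved.
    coarse = [e + (detail[max(i - 1, 0)] + detail[min(i, len(detail) - 1)]) / 4
              for i, e in enumerate(even)]
    return coarse, detail

coarse, detail = lifting_forward([2.0, 4.0, 6.0, 8.0, 6.0, 4.0, 2.0, 0.0])
print(detail)  # details vanish on the linear ramps, which is what enables compression
```

Compression comes from quantising or discarding small detail coefficients: wherever the terrain is locally smooth, the predictor is accurate and the details are near zero, so only the coarse samples need to be stored.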
Abdurrahman Aydemir and Arthur Sweetman
Abstract
The educational and labor market outcomes of the first, first-and-a-half (1.5), second, and third generations of immigrants to the United States (US) and Canada are compared. These countries’ immigration policies have diverged on important dimensions since the 1960s, resulting in large differences in immigrant source country distributions and a much larger emphasis on skill requirements in Canada, making for interesting comparisons. Of particular note is the educational attainment of US immigrants, which is currently lower than that in Canada and is expected to influence future second generations, causing an existing education gap to grow. This will likely in turn influence earnings: controlling only for age, the current US second generation has earnings comparable to those of the third generation, whereas the Canadian second generation has higher earnings. Importantly, the role of, and returns to, observable characteristics are significantly different between the US and Canada. Observable characteristics explain little of the difference in earnings outcomes across generations in the US but have remarkable explanatory power in Canada. Controlling for a wide array of characteristics, especially education, has little effect on the US second generation's earnings premium, but causes the Canadian premium to become negative relative to the Canadian third generation. The Canadian 1.5 and second generations’ educational advantage is of benefit in the labor market, but does not receive the same rate of return as it does for the third generation, causing a very sizable gap between the current good observed outcomes and the even better outcomes that would be expected if the 1.5 and second generations received the same rate of return to their characteristics as the third generation. Why the US differs likely follows from a combination of its lower immigration rate, its different selection mechanism, and its settlement policies and practices.
Abstract
Electronic document delivery is a concept which promises to solve end-users’ problems in retrieving the primary information referenced in bibliographic databases. This article describes an approach to electronic document delivery which gradually evolved at Tilburg University over the past two years, leading to the development of a system called Ariadne. First, a pragmatic description of electronic document delivery is developed as the basis for a generation model of electronic document delivery systems. This model is illustrated with short references to existing systems and leads to the identification of global requirements for an Ariadne-like system. Special attention is paid to existing and developing standards in this field, notably the work of the Group on Electronic Document Interchange (GEDI). The remainder of the article addresses the general model of Ariadne, currently under development at Tilburg University. The article concludes with some strategic issues for libraries and publishers in this field, and a short look into the future.
Don L. Bosseau, Beth Shapiro and Jerry Campbell
Abstract
EBSCO's Executive Seminar for research library directors, Digitising the reserve function: steps toward electronic document delivery, was held on 5 February in Philadelphia, PA during the American Library Association's Midwinter Meeting. The seminar was attended by more than 60 librarians from some of the most respected research libraries in North America.
Sam Byrd, Glenn Courson, Elizabeth Roderick and Jean Marie Taylor
Abstract
Since 1995, the Library of Virginia’s Digital Library Program (DLP) has created digital images of more than 700,000 original document pages, 1,100 maps, 36,000 photographs, and 1.6 million catalog card images, and has created 32 bibliographic databases with more than 330,000 MARC records, 50 electronic card indexes, and numerous electronic finding aids. The bulk of the DLP’s funding comes from the Library Services and Technology Act (LSTA) federal program, but in 1997 the Library received a grant from the Andrew W. Mellon Foundation to catalog and digitize the Virginia Historical Inventory Project (VHI). After an introduction to the DLP and VHI, this article will discuss the costs and benefits of creating the online version and will compare the one‐time development cost and subsequent delivery of the digital resource to the long‐term costs and benefits of providing access to these materials via traditional means.
Reinhard Bauer, Leszek J. Golonka, Torsten Kirchner, Karol Nitsch and Heiko Thust
Abstract
Thermal properties of Pt or RuO2 thick-film heaters made on alumina, aluminum nitride or low temperature co-fired ceramics (LTCC) were compared in the first step of our work. Special holes to improve the heat distribution were included. Several heater layouts were analysed. The heat distribution was measured by an infrared camera at different heating power levels. Second, the optimization of LTCC constructions was carried out. The simple structure of LTCC permitted the achievement of a high packing density. It was possible to integrate a heating element, made from a special thick-film ink, as a buried film inside a substrate. An important step in our technology was the making of the holes. A pattern of holes (achieved by punching or laser cutting) around the heating area permitted a controllable heat gradient. The quality of lamination and the structure of the buried elements were investigated with an ultrasonic microscope.
Shien‐Chiang Yu and Ruey‐Shun Chen
Abstract
The Internet has forced libraries to consider how to help users retrieve information rapidly. This consideration has accelerated the development of electronic publishing and has positioned the library as mediator between users and providers: managing information circulation and providing secure copyright clearance through an efficient electronic document delivery and payment mechanism. This work develops an Extensible Markup Language (XML) framework for electronic document delivery that offers a novel electronic document delivery system and also locates publishers who can provide the copyrighted material in an electronic format via the OPAC. The proposed electronic document delivery system has four functions: (1) it enables electronic document payment; (2) it shortens the time between inquiry and electronic document retrieval; (3) it anticipates the changing role of libraries; and (4) it reduces the printed collection load of libraries.
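The kind of XML message such a framework might exchange between an OPAC and a publisher can be sketched with the standard library. The element names and payment method below are illustrative assumptions, not the schema defined in the article:

```python
# A minimal sketch of an XML document-delivery request. The element and
# attribute names are illustrative assumptions, not the article's schema.
import xml.etree.ElementTree as ET

def build_request(isbn: str, requester: str, fmt: str = "PDF") -> str:
    """Serialise a delivery request that an OPAC could forward to a publisher."""
    req = ET.Element("deliveryRequest")
    ET.SubElement(req, "item", {"isbn": isbn})          # identifies the document
    ET.SubElement(req, "requester").text = requester    # who receives the copy
    ET.SubElement(req, "format").text = fmt             # requested delivery format
    ET.SubElement(req, "payment", {"method": "account"})  # copyright-clearance payment
    return ET.tostring(req, encoding="unicode")

xml_doc = build_request("957-123-456-7", "reader@example.edu")
print(xml_doc)
```

Because the request is plain XML, both the library system and the publisher can validate it against an agreed schema before fulfilling delivery or charging the payment account.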