Search results: 1–10 of over 95,000
Along with organisations in other fields, retailers have been using computers in management systems since the mid‐1960s and, in some cases, much earlier. Over this period, there have been dramatic changes in the computer technology available for use by management, together with considerable accumulated experience in using it, particularly in retailing. However, this has, in many industries, been offset by an increase in the problems facing managers; in retailing, for instance, companies now have to face economies in which disposable incomes have been squeezed, whilst buying patterns are changing rapidly and becoming difficult to predict. A consequence of this is that to survive today retailers must be far better at product range planning, cash planning and control of capital than they needed to be in the 1960s. They may be helped in this by an increasing understanding of how to manage product range, cash flow or funds allocation problems, and also by the availability of more advanced computing facilities which allow managers to apply this understanding more effectively. These facilities vary from the range of computers on offer (mainframe to micros) to data flow networks, automated data input, visual display terminals and specialist software for retail planning and control (e.g. distribution packages).
The writer contends that the mini‐computer provides a very substantial potential area of application for the processing of information, management control and physical control of warehouse plant. Mini‐computers can be coupled with a whole range of peripheral input/output devices, from which configurations can be developed, starting with very simple systems costing as little as £3,500.
This paper seeks to discuss measurement units by comparing internet use with traditional media use, and to understand internet use from the perspective of traditional media use.
The benefits and shortcomings of two log file types are examined in detail: client‐side and server‐side log files are analysed and compared using the proposed units of analysis.
Server‐side session time calculation proved remarkably reliable and valid, based on its high correlation with the client‐side time calculation. The analysis revealed that server‐side log file session time measurement is more promising than researchers had previously speculated.
The ability to identify each individual user, together with few caching problems, was a strong advantage for the analysis. These web design implementations and the web log data analysis scheme are recommended for future web log analysis research.
This paper examined the validity of client‐side and server‐side web log data. As a result of triangulating the two datasets, research designs and analysis schemes could be proposed and recommended.
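The server‐side session time calculation discussed above can be illustrated with a minimal sketch. The log entries, field layout and 30‐minute inactivity cutoff below are assumptions for illustration only, not the paper's actual data or method: a user's requests are grouped into sessions whenever the gap between consecutive timestamps exceeds the cutoff, and the durations are summed per user.

```python
from datetime import datetime, timedelta

# Hypothetical server-side log entries as (user_id, timestamp) pairs.
# A real server log would be parsed from e.g. the Common Log Format.
log_entries = [
    ("user_a", "2024-01-01 10:00:00"),
    ("user_a", "2024-01-01 10:05:00"),
    ("user_a", "2024-01-01 10:12:00"),
    ("user_a", "2024-01-01 11:30:00"),  # gap > cutoff: starts a new session
    ("user_b", "2024-01-01 10:02:00"),
]

SESSION_TIMEOUT = timedelta(minutes=30)  # assumed inactivity cutoff

def session_times(entries):
    """Group each user's requests into sessions; return total session time per user."""
    by_user = {}
    for user, ts in entries:
        by_user.setdefault(user, []).append(datetime.fromisoformat(ts))
    totals = {}
    for user, stamps in by_user.items():
        stamps.sort()
        total = timedelta()
        start = prev = stamps[0]
        for t in stamps[1:]:
            if t - prev > SESSION_TIMEOUT:
                total += prev - start  # close the previous session
                start = t
            prev = t
        total += prev - start
        totals[user] = total
    return totals

print(session_times(log_entries))
# user_a: one 12-minute session plus a zero-length single-request session;
# user_b: a single request, so a zero-length session.
```

A single request yields a zero-length session here, which is one reason client‐side measurements are needed to validate server‐side estimates.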
VINE is produced at least four times a year with the object of providing up‐to‐date news of work being done in the automation of library house‐keeping processes, principally in the UK. It is edited and substantially written by the Information Officer for Library Automation based in Southampton University Library and supported by a grant from the British Library Research and Development Department. Copyright for VINE articles rests with the British Library Board, but opinions expressed in VINE do not necessarily reflect the views and policies of the British Library. The subscription for VINE in 1981 will be £20 for UK subscribers and £23 for overseas subscribers — the subscription year runs from January to December and VINE is available in either paper or microfiche format.
A survey of the current state of documentation practice in museums is presented. This concentrates on the broad themes of the practice, making comparisons with analogous library procedures, where appropriate. A brief introduction to museums and their organizational framework within the United Kingdom is given. With this as background, the methods of documentation used by museums are reviewed, and a survey presented of current developments on an international and national scale.
The purpose of this article is to contribute to our stock of knowledge about who uses networks, how they are used, and what contribution the networks make to advancing the scientific enterprise. Between 1985 and 1990, the Survey of Income and Program Participation (SIPP) ACCESS data facility at the University of Wisconsin‐Madison provided social scientists in the United States and elsewhere with access through the electronic networks to complex and dynamic statistical data; the 1984 SIPP is a longitudinal panel survey designed to examine economic well‐being in the United States. This article describes the conceptual framework and design of SIPP ACCESS; examines how network users communicated with the SIPP ACCESS project staff about the SIPP data; and evaluates one outcome derived from the communications, the improvement of the quality of the SIPP data. The direct and indirect benefits to social scientists of electronic networks are discussed. The author concludes with a series of policy recommendations that link the assessment of our inadequate knowledge base for evaluating how electronic networks advance the scientific enterprise and the SIPP ACCESS research network experience to the policy initiatives of the High Performance Computing Act of 1991 (P.L. 102–194) and the related extensive recommendations embodied in Grand Challenges 1993 High Performance Computing and Communications (The FY 1993 U.S. Research and Development Program).
First of all, I must apologise for the interval between this VINE and the last. Unfortunately VINE's production cycle is growing longer as automated library systems become more complex, and consequently more time‐consuming to write up. Moreover, in this issue I have attempted in certain articles, for instance those on COM bureaux and the Telepen, to adopt a thematic approach to the subject, rather than reporting on individual projects. The process of cross‐checking the details of such articles with all the organisations concerned has been partly responsible for the delay in publishing VINE 17. Nevertheless, in the long term I still hope to increase the frequency with which VINE is published, thereby increasing its currency and decreasing the size of each individual issue.
Computer matching is a mass surveillance technique involving the comparison of data about many people, which have been acquired from multiple sources. Its use offers potential benefits, particularly financial savings. It is also error‐prone, and its power results in threats to established patterns and values. The imperatives of efficiency and equity demand that computer matching be used, and the information privacy interest demands that it be used only where justified, and be subjected to effective controls. Provides background to this important technique, including its development and application in the USA and in Australia, and a detailed technical description. Contends that the technique, its use, and controls over its use are very important issues which demand research. Computing, telecommunications and robotics artefacts which have the capacity to change society radically need to be subjected to early and careful analysis, not only by sociologists, lawyers and philosophers, but also by information technologists themselves.
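At its core, the computer matching technique described above amounts to joining personal records acquired from independent sources on a shared identifier. The record layouts, field names and identifier below are assumptions for illustration, not drawn from any actual government programme:

```python
# Minimal sketch of computer matching: records about many people,
# acquired from two independent sources, compared on a shared key.
tax_records = [
    {"id": "123", "name": "A. Smith", "declared_income": 20000},
    {"id": "456", "name": "B. Jones", "declared_income": 0},
]
benefit_records = [
    {"id": "456", "benefit": "unemployment", "amount": 5000},
    {"id": "789", "benefit": "housing", "amount": 3000},
]

def match(source_a, source_b, key="id"):
    """Return pairs of records from the two sources that share the same key."""
    index = {rec[key]: rec for rec in source_a}
    return [(index[rec[key]], rec) for rec in source_b if rec[key] in index]

hits = match(tax_records, benefit_records)
for tax, benefit in hits:
    print(tax["id"], tax["declared_income"], benefit["benefit"])
```

The error‐proneness the abstract notes arises in practice because real datasets rarely share a clean common key: matching on imperfect identifiers such as names or addresses produces both false matches and missed matches, with real consequences for the individuals concerned.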