Emerald Group Publishing Limited
Copyright © 2008, Emerald Group Publishing Limited
Article Type: Professional Literature From: Library Hi Tech News, Volume 25, Issue 4.
Lessons Learned: Usability Testing a Federated Search Product
Carole A. George, in The Electronic Library, v. 26 (2008) issue 1, pp. 5-20
This case study evaluates the use of the MetaLib software by eight members of the Carnegie Mellon University community in Pittsburgh, Pennsylvania. Each participant was carefully chosen to represent the demographic mix of the institution in terms of undergraduate and postgraduate students, faculty, and native and non-native English speakers. The evaluation entailed the library staff setting the subjects six tasks to complete (e.g. finding an article in an online journal) while simultaneously "talking through" what they were doing. This talking through was recorded by a librarian moderator, who could also, as a last resort, offer the participants advice. One prime benefit of this sort of evaluation is that it limits the amount of potentially useful information lost when feedback is recorded only after task completion. The objectives of the testing were fourfold:
assess the system's overall user-friendliness;
verify the appropriateness of the terminology;
gather subjective feedback; and
identify specific problem areas and recommendations on how best to address them.
The first task required participants to find a full-text article on a particular topic. All began by typing their keywords into the Basic Search box, at the same time selecting a Quick Set subject category. None first checked whether or not they needed to log in; in fact, not logging in restricted the results to bibliographic records in the library catalogues. However, after initial assistance (in one case taking 14-15 min to realise the problem), all were able to find the article and, all except one, email themselves the link. The other tasks involved:
identifying a relevant bibliographic citation and adding it to a stored results list;
performing an advanced cross search of a number of databases;
finding and displaying a specific journal article;
adding a useful source for patent information to their list; and
carrying out a freestyle exploration of MetaLib.
During the course of completing these, a number of useful areas for improvement or targeted help were identified, including the use of the browser's Back button and the ambiguous meaning of unexpected icons (e.g. a blue "x" for delete, or the vendor-specific "SFX" icon).
Having accomplished the set tasks, participants were asked to complete a questionnaire expressing their attitudes to ten statements on a standard five-point Likert scale. In ascending order, the responses were as follows:
Prefer fewer databases/shorter wait time: 1.71.
Error messages clear/adequate: 2.17.
Probably attend training sessions: 3.00.
Links/labels easy to understand: 3.00.
Icons easy to understand: 3.25.
Easy to find information: 3.25.
Easy to learn: 3.33.
Instructions easy to understand: 3.33.
Satisfied with MetaLib: 3.63.
Appearance appealing: 4.00.
These results suggested that participants, once they had understood the features, appreciated the ability to search multiple databases simultaneously, but that they still harboured reservations about the system's error messages. There was a marked ambivalence over the intuitiveness of MetaLib, with six average scores clustered around the midpoint of the scale. However, the system was generally seen as satisfactory, and its appearance won a clear thumbs-up from participants.
After the qualitative data had been analysed, the research confirmed that users' interaction with a new interface is largely shaped by their familiarity with others. The initial failure to recognise the need to log in, for example, reflected not only previous experience with the stand-alone library catalogue website but also expectations based on such search engines as Google. The same applied to problems over terminology or search syntax (e.g. whether an author name is entered directly or inverted). The navigability of the system in particular was perceived as a key area for improvement in its first iteration, with participants failing to distinguish between primary and secondary navigation links.
While many identified issues could feed into the design of any training package, it was recognised that users would generally wish to use the system before receiving such training. It was therefore imperative that glaring departures from popular search engine interface norms be kept to a minimum. This in turn would enable new users to recover quickly from their early errors and progress steadily towards full competence. It is envisaged that further testing will be required to determine the extent to which the improvements in appearance, functionality and support have been successful.
A Review of the Major Projects Constituting the China Academic Digital Library
Xiangxing Shen, Zhong Zheng, Shuguang Han, and Chong Shen, in The Electronic Library, v. 26 (2008) issue 1, pp. 39-54
In 1994, as part of the Chinese government's 1993-1998 five-year technology development plan, the China Education and Research Network (CERNET) was founded. Between 2000 and 2004 China established high-speed interconnections with the rest of the world and introduced its second-generation internet (CERNET2), with transmission rates increasing to up to 10,000 times the earliest speeds. By 2007 the number of universities and other educational and research institutions connected stood at 1,500, with over 15,000 end-user communities.
This development in turn led to the establishment of the China Academic Library and Information System (CALIS) late in 1998. By 2005 CALIS was facilitating the purchasing activities of 62 consortia, comprising almost 800 universities and subscribing to 216 databases (incl. 20,000 international conference proceedings, 10,000 journals and 100,000 e-books) at a cost of US$35 million. Within the system are modules for online cataloguing, interlibrary loans, electronic document delivery, and searching a union catalogue, containing just under 2 million records in the autumn of 2006 (Table I).
To complement the services offered by CALIS, four specialist centres have been set up to cover:
science, social sciences and humanities;
engineering and technology;
Furthermore, local governments, recognising the importance of CALIS to their regional development, have incorporated it into the latest five-year plan for information and communication technology (ICT), establishing 15 local information centres and thereby broadening the network's reach across the whole of China.
In addition to CALIS, the China Academic Digital Library (CADL) project was set up in 2005 to operate within national standards. CADL has been granted a budget of $19.5 million from public funding to scan copies of millions of Chinese and foreign-language books to enrich the electronic resources outlined above. During the current five-year plan, 22 digital library data centres are being built mostly at universities across China.
This information system, built with technical support from OCLC and other partner organisations in the USA, is based on a distributed infrastructure, with a Chinese-developed hardware/software platform. Among its features are unified user authentication, data retrieval, e-commerce, virtual reference services, dissertation management, and a textbook reference system. The architecture is being developed for sustainable, long-term access, and construction work follows open resource principles throughout. Chinese investment in the system totals $1.3 million, with the bulk ($907,000) coming from the national government, while the American contribution has been valued at $10 million. One key aim is to create a resource founded on a core collection of one million books, half in Chinese (incl. 100,000 masters and doctoral theses) and half in English, for education and scientific research.
One sub-project, the Chinese-American Social Sciences and Humanities Library, has a secure national server for storing English-language documents and/or bibliographic data on, inter alia, 12,000 journals, including all 2,528 listed in the two relevant citation indexes. The aim is that, for those items not stored electronically, full-text delivery will be achieved within one to three days of a reader's request. To date more than 74,000 requests have been handled from over 6,000 registered users. Statistics for 2005 show that 99 per cent of requests came from academia, 97 per cent of which were from individual users rather than institutions. At the same time, as the system matures, costs have come down by 20 per cent, from $4.00 per request in 2004 to $3.20 in the following year.
After the first ten years of development, issues remaining to be addressed include:
benefitting further from successful American and European experience on topics like efficiency, improved use of human resources, and resource sharing, while taking into account distinctive aspects of the Chinese context;
the effect of the low level of central government involvement, reflected in the continued lack of a unified national programme encompassing local government, industry and other agencies as well as academic institutions;
despite the existence of some 90 proposed national standards drafted by the China Digital Library Building Standards and Norms group, sponsored by the Ministry of Science and Technology, some regions of the country are lagging far behind others; and
the lack of effective protection for intellectual property is hampering development, with some organisations involved in constructing the digital library neglecting or ignoring copyright concerns. However, it is hoped that, with the implementation of the Information Network Transmission Protection Ordinance issued by the National Copyright Bureau in 2006, tighter controls will now enable projects to proceed along lines more closely aligned to international norms.
Preserving scientific electronic journals: a study of archiving initiatives
Golnessa Galyani Moghaddam, in The Electronic Library, v. 26 (2008) issue 1, pp. 83-96
While the essential role of scientific journals in scholarly communication remains unchallenged, the migration from print to digital formats is a cause for concern for researchers, librarians and publishers alike, particularly as regards the long-term preservation of electronic documents. By considering nine different archiving initiatives and models, namely those of JSTOR, Portico, E-Print Repositories, the open access (OA) model, Lots of Copies Keep Stuff Safe (LOCKSS), the OCLC Digital Archive, the Joint Information Systems Committee (JISC), PubMed Central and the Koninklijke Bibliotheek's e-Depot, this article provides an overview of the status quo.
One initial problem arises over the term "archiving", which for the computing industry tends to imply storing something securely but in a way which is no longer immediately accessible. To avoid confusion, the term "digital preservation" may therefore be preferred, meaning managing activities to ensure the retention of a resource's appearance, content and functions for as long as access to it is required, even after the original technology on or for which it was created is defunct. Likewise the terms "authentication" (i.e. the ability to prove that a resource remains what it is supposed to be) and "authenticity" (i.e. a guarantee of the quality of the content): both concepts are key to the storage and preservation of electronic journals. And coupled with the technical issues are those relating to licensing, and its effect on access for research libraries.
The JSTOR model was launched as a contribution to relieving the pressure on libraries' limited open-access shelving. It began in 1995 with digitised versions of print rather than "born-digital" journals, always one or two steps behind the most recent issue or volume. The expansion of electronic journals required a different model, and JSTOR has responded to this challenge with Portico. The main aim of this system is to build a sustainable infrastructure and economic model to support the managed preservation of e-publishers' source files. Non-exclusive archiving licences with publishers enable Portico to ingest, normalise, archive and migrate the content of e-journals. Based on a not-for-profit model, Portico's income derives from fees paid by both publishers and libraries, with benefits to both groups should a technical or financial catastrophe undermine a publisher's ability to provide access from its own store.
The emphasis for E-Print Repositories, based in Italy and Spain, is one of encouraging scholars in library and information science (LIS) to deposit copies of their articles both before ("pre-print") and after ("post-print") refereeing and publication. The whole organisation is voluntary, from the maintenance of the system to the deposit of research, and access too is free. So, while the system offers an interesting resource, it can scarcely be deemed comprehensive, not least because many publishers are unwilling to have their content accessed in this way. Furthermore, and peculiarly for an LIS resource, it lacks the rigour of controlled access points. Similar to the preceding, the OA model again facilitates free access to journal contents, this time by charging the researchers or their institutions for publication rather than charging the readers and their libraries to subscribe. The University of Lund offers a useful directory of such journals (www.doaj.org), with various OA projects in existence from JGate in India to RoMEO and SHERPA in the UK. Ultimately, since universities tend to be publicly funded, the source of the revenue differs little from the subscription model, though the costs of OA publication are significantly lower than for the commercial equivalent. It should be noted that questions do arise over the extent to which either of these models succeeds from a preservation perspective.
The LOCKSS model, based at Stanford University, relies for its preservation strategy, as its name suggests, on distributing multiple copies at low cost in persistent caches stored by subscribing institutions, which benefit by taking custody of their e-subscriptions as with the hard-copy equivalents. These copies act as a back-up in the event of publisher system failure. Now over seven years old, the system appears sound, with research suggesting that it satisfies the needs of publishers large and small.
The OCLC Digital Archive began with the problem of describing and storing websites and webpages as well as digitally reformatted heritage materials. It has developed to a point where it currently has contracts to archive over 4,500 research journals. Collaborating with the Research Libraries Group and, since 2006, with LOCKSS, OCLC has sought to focus not only on facilitating but also on incentivising the preservation of digital content.
The JISC in the UK has been involved in licensing of e-journals on behalf of higher and further education institutions since 1995. And, as a key funding body for the development of digital content, JISC has a wealth of experience collaborating with numerous partners over digital preservation and access. This has led JISC to identify two key issues of concern: first, the length of time taken to negotiate and secure licences with publishers; and second, the dilemma of whether to continue preserving both the digital and hard copies and, if so, how.
PubMed Central, based in the USA and supported by the National Library of Medicine, is another open-access archive, providing free, electronic access to journals in the life sciences, restricted only by the delays imposed by publishers on the availability of recent issues. Copyright remains firmly with the publishers. However, because deposit is voluntary, one weakness is that not all relevant journals are represented.
Finally, the Koninklijke Bibliotheek's e-Depot acquires up to 95 per cent of all digital publications through voluntary deposit. These are freely available, both on site and remotely, to bona fide national library users and, in the event of publisher calamities, to subscribers in the Netherlands.
So, the situation as regards digital preservation of e-journals remains complex, with internationally agreed standards yet to emerge. In the interim, a number of models continue to be developed with the participation of a wide range of publishers, including some like Oxford University Press, which have chosen to be involved across several leading initiatives.
The ETS iSkills™ Assessment: A Digital Age Tool
Mary M. Somerville, Gordon W. Smith, and Alexius Smith Macklin, in The Electronic Library, v. 26 (2008) issue 2, pp. 158-71
The predicted continuing growth in e-learning, e-business, e-commerce, e-government, e-research, etc. around the world provides a constant challenge to employers and instructors, students and employees. However, the methods for testing ICT literacy seem on careful analysis to be failing to diagnose accurately the competence of those who use them. Too often, when assessing their own skills and knowledge, people tend to express levels of confidence which exceed their abilities. Likewise, the software designed to test these skills, sometimes in the form of multiple-choice questions, fails to provide an adequate picture, as the tasks set do not sufficiently match the real requirements of the course or workplace. Whereas the emphasis to date has been on demonstrating proficiency in carrying out discrete tasks, increasing recognition is being given to developing tools which test not only a person's ability to download an MP3 file or send an email, for example, but, more importantly, to evaluate information drawn in from a variety of sources and use ICT effectively to present and share the knowledge gained. In short, there is a need for tests where scores depend on solving complex problems.
In 2001, based on a definition of ICT literacy as "the ability to use digital technology, communication tools and/or networks appropriately to solve information problems in order to function in a knowledge society", the non-profit Educational Testing Service (ETS), based in Princeton, New Jersey, began developing an ICT literacy assessment system, now called iSkills™. Its purpose is to measure the capabilities of post-secondary students in seven skill areas: to define, access, manage, integrate, evaluate, create and communicate information in a technological environment. Informed by standards established by the Association of College and Research Libraries in the USA, the International Society for Technology in Education, the New Zealand Institute for Information Literacy, and the Council of Australian University Libraries, the two-hour-long, interactive iSkills program is the result of more than 1,000 hours' work by ETS personnel and subject specialists. Large-scale testing of the program began in 2003, with over 4,500 examinees across 31 US campuses. More recent evaluation has led to the development of two distinct versions, one for those studying at high school or upper-secondary level, and the other aimed at those in tertiary education.
A few examples are cited below (copyright 2007 ETS, all rights reserved) to illustrate how iSkills™ manages to simulate the real-life demands of contemporary computer users.
Seven supervisors have sent information about training courses to the human resources director, and she has forwarded them to you. Use this information verbatim (copy and paste) to create a memo summarizing training course attendance.
You've volunteered to create a flier for a community clean up day to be held in your neighborhood. Include the map below along with the following information and create an attractive one-page flier for the event. The event will take place on Saturday, 6 May from 1-4 p.m. at Lincoln Square Park. Event organizers want a tear-off sheet to print names, addresses, and phone numbers.
After falling awkwardly during a tennis match, your sister has been diagnosed with a rupture of the anterior cruciate ligament, or a tear of her connective tissue, in her right knee. While her condition is not an emergency, her doctor has recommended arthroscopic surgery to repair the injury and to restore her strength and mobility. You would like to find several reliable sources on the Web that recommend treatment and rehabilitation options for this condition.
In the first task, test takers have the use of a simulated e-mail tool and word processor, but also need to exercise decision-making skills to sift the relevant information from a number of messages. In the second, knowledge of various design features in word-processing software is required. The third task requires students to show competence in searching for, evaluating, and communicating information found on the internet: all key information literacy skills.
To summarise, the iSkills™ assessment appears to substantiate the observation of librarians that familiarity with freely accessible search tools (e.g. Google) does not necessarily translate into good ICT literacy skills. As indicated above, this research showed that, while 90 per cent of students in one group taking the test rated themselves highly skilled users of information technologies, 52 per cent of the same group performed below the level achieved by 50 per cent of the entire test population. Two key areas for targeted improvement were identified:
defining an information need; and
accessing appropriate resources.
For example, 88 per cent of students failed to understand the precise information need set in the tasks and, as a result, search strategies tended to be vague, the choice of keywords too broad, and little or no use was made of operators to limit or expand searches.
Obviously, devising tests of equivalent levels of difficulty which are appropriate to students in different disciplines requires considerable input from teams of subject specialists. However, the existence of a single, standardised test such as iSkills™ enables information specialists and educators to compare the effectiveness of their ICT literacy training programmes with the results of other similar institutions in their region.
An Innovative Web-Based Approach for Study Skills Development in Higher Education
Caitriona Bermingham and Abdulhussain E. Mahdi, in International Journal of Web Information Systems, v. 3 (2007) issue 3, pp. 212-30
Higher education institutions worldwide are under pressure to improve the level of training offered to students in study/professional skills. Quandaries exist in particular over the preferred method of delivery, whether face-to-face or automated, the pace of such training, whether it should be compulsory, and the extent to which it can and should be customised to the needs and learning styles of individual students. This research considered the situation as it affected the University of Limerick in Ireland and, taking into account the findings of related studies, developed and tested a new web-based learning content management system (LCMS), called the "Skills SuperStore".
In addition to a literature review and a practical survey of approaches applied in the UK, USA and elsewhere, this project conducted a targeted, voluntary analysis of the needs of 150 students, mostly in their first year, and 50 faculty members. The questions in this local survey were categorised as:
problems faced in studying, reviewing or learning new material;
personal study habits; and
possible solutions and preferred training methods.
By far the most common area of difficulty raised in the first category was that of effective planning and time management, followed at some distance by revising, then project work, preparing for exams, understanding course material, effective reading and report/essay writing, down to, at the bottom end of the scale, note-taking and learning blockages. In the second category, the survey found that 50 per cent of students tended to study in environments prone to distractions, while 64 per cent felt it adequate to devote five hours or fewer to study. As for the third category, of the four main choices presented (conventional workshops, modules integrated within the curriculum, audio-visual materials, or an online system), 60 per cent of students stated a preference for an online system.
Drawing together the findings of each survey phase, the project team decided to move away from the predominantly linear approach of earlier LCMS in favour of much greater personalisation. Beginning with easy-to-use administrator functions, enabling non-ICT-expert staff from the university's skills training unit to create a wide range of templates, the Skills SuperStore allows students to pick those modules which match their current needs, and to follow them at their own pace. By providing answer-dependent links within the LCMS, students are directed to successive modules targeted to their particular needs, while automatically skipping other areas which experience shows to be less relevant. For example, in relation to a module on time management, questions gauge whether the student's motivation is extrinsic or intrinsic and respond accordingly, both with appropriate feedback and by progressing to the next module which best fits the response.
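The answer-dependent routing described above can be sketched as a simple lookup from (current module, answer) pairs to the next module. This is a minimal illustration only: the module names, the extrinsic/intrinsic branch and the fallback module are hypothetical stand-ins, not the actual structure of the Skills SuperStore.

```python
# Hypothetical sketch of answer-dependent module routing in an LCMS.
# All module names and branches below are illustrative assumptions.

# Each (current module, answer) pair maps to the module judged most
# relevant to that response; anything else falls through to a default.
ROUTES = {
    ("time_management", "extrinsic"): "goal_setting",
    ("time_management", "intrinsic"): "planning_tools",
}


def next_module(current, answer, default="study_skills_overview"):
    """Pick the next module that best fits the student's answer,
    skipping modules the response marks as less relevant."""
    return ROUTES.get((current, answer), default)
```

A session engine would call `next_module` after each embedded question, so two students finishing the same module can be routed down different paths at their own pace.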
Further innovative aspects of the Skills SuperStore include the provision of a "discussion room" and notice board to facilitate peer-assisted learning by sharing problems and solutions with other students. The system also makes use of the so-called "persona effect", whereby an animated, life-like character, or interface agent, appears to interact with the user, seeking to turn each learning session into more of a human-human rather than a mere human-computer experience.
With the initial version of the Skills SuperStore ready, 20 students were asked to complete two skills training sessions of 45-50 min each, and then to assess their impressions of the system's: interest level; visual presentation; learning styles; information value; navigability; and interactivity. The results showed that general satisfaction with the system was high, including 80 per cent who indicated that their learning experience had been either somewhat or significantly enhanced. Further testing now needs to be planned to evaluate the long-term benefits of the system in developing students' study/professional skills during the course of their degrees.
Mubser: A Bilingual Braille to Text Translation with an Arabic Interface
AbdulMalik Al-Salman, Mohamed Alkanhal, Yousef AlOhali, Hazem Al-Rashed, and Bander Al-Sulami, in International Journal of Web Information Systems, v. 3 (2007) issue 3, pp. 257-71
The Braille system of writing has been adapted for use by blind people around the world with many languages, including Arabic and English, adopting both letter-by-letter (Grade 1) and contracted (Grade 2) approaches. In Grade 2 Braille, to speed up the reading process, common letter combinations and frequent words (e.g. in English /ch/, /sh/, "the", etc.) can be represented by a single cell. However, with increased access to born-digital documents stored in electronic formats, it is often easier for blind people to use screen-reading software. While a number of OCR software packages already exist for reading embossed Braille texts in English, the same is not yet available for Arabic speakers, who have the added need to make use of bilingual texts in Arabic and English. The aim of this research has therefore been to produce a system capable of converting Braille to ordinary text so that it can in turn be interpreted by Arabic screen-reading software.
To solve the problem, the software developed needed to distinguish first the initial language (Arabic or English) of the text and second the grade of Braille (letter-by-letter or contracted) used. The approach adopted was to recognise first the punctuation, and then to seek out certain cell strings corresponding with the most frequently occurring words in either language, typically conjunctions, prepositions, pronouns and auxiliary verbs (i.e. but, from, they, are, etc.). In implementing this, care had to be taken to define lists of cell strings to be searched which were unique to each language. The researchers had in particular to eradicate all scope for confusion (e.g. the cell which represents the letter "c" in English is used in Grade 2 Arabic for the prefix "al-", approximating in meaning to the definite article in English).
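The frequent-word matching stage described above can be sketched as a vote between per-language marker lists. This is a rough illustration under stated assumptions: cells are modelled here as decoded strings rather than dot patterns, and the marker sets are illustrative examples from the article, not the actual cell-string lists used by the Mubser system.

```python
# Hypothetical sketch of language identification by unique frequent-word
# cell strings; the marker sets are illustrative, not Mubser's real lists.

# Frequent words assumed unique to each language's Braille code
# (after ambiguous cells such as English "c" vs Arabic "al-" have been
# removed from whichever list would cause confusion).
ENGLISH_MARKERS = {"but", "from", "they", "are"}
ARABIC_MARKERS = {"al-", "min", "fi", "hum"}


def identify_language(cells):
    """Guess the language of a decoded cell sequence by counting hits
    against each language's list of unique frequent-word strings."""
    english_hits = sum(1 for c in cells if c in ENGLISH_MARKERS)
    arabic_hits = sum(1 for c in cells if c in ARABIC_MARKERS)
    if english_hits == arabic_hits:
        return "unknown"  # no decisive evidence either way
    return "english" if english_hits > arabic_hits else "arabic"
```

A second pass of the same shape, with marker lists of Grade 2 contractions, could then decide between letter-by-letter and contracted text within the identified language.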
By coding the rules necessary for automatic language and grade identification in text, database or XML format, the researchers have also successfully applied the system to the "translation" of alphabetic texts to Braille and vice versa. With the appropriate peripherals, the resulting program is able to output hard copy in both embossed Braille and printed text. Future enhancements include spell-checking, to resolve errors arising from conflicting Arabic Braille conventions, the ability to support mathematical and scientific symbols, and the addition of further languages.
1. Editor's note: Many of the Braille cell arrangements used for writing English match those for the equivalent sounds of letters in Arabic (e.g. /b/, /d/, /f/, /sh/, etc.). The remaining cells, such as those for the English sounds /c/, /ch/, /p/ or /x/ which do not exist in Arabic, are used in Arabic Braille for sounds or contractions which are not found in English. Thus Braille readers in either language can recognise most of the cells without understanding the text, just as English-speakers can recognise all the characters of a text in Hungarian, Turkish or Xhosa and have a stab at pronouncing it, yet without understanding a word.
Roderic Vassie is Head of Publishing at Microform Academic Publishers, Wakefield, UK. He was formerly a curator at the British Library, a selection officer at the Library of Congress and bibliographic coordinator at the UAE University.