Article Summaries

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 7 March 2008



Vassie, R. (2008), "Article Summaries", Library Hi Tech News, Vol. 25 No. 2/3.



Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited


Article Type: Professional Literature From: Library Hi Tech News, Volume 25, Issue 2/3.

Test Selection Criteria for Real-Time Systems Modeled as Timed Input-Output Automata

Abdeslam En-Nouaary, in International Journal of Web Information Systems, vol. 3 (2007) issue 4, pp. 279-92

The heavy reliance placed on computer software to deliver services affecting all areas of contemporary day-to-day life underlines the importance of testing systems prior to deployment. Testing the implementation under test (IUT) is a crucial process, entailing the generation of sequences of inputs, with the reaction checked against the desired outputs in order to ascertain the quality of the product. Vital to this process is choosing a sequence which takes the software logically through each of its paces, thereby revealing every possible fault. Ultimately, the extent to which the testing process can be deemed fit for purpose depends entirely on the robustness of the selection criteria used to determine the test sequence.

Whereas algorithms have been developed which assist in generating timed test sequences, they have only limited application to real-time systems, because they lead either to an unmanageable number of sequences or to the problem known as "state explosion". To overcome this, a new approach models the system as timed input-output automata (TIOA). This research proposes not so much a single specific method but rather a hierarchy of test selection criteria, which in turn offers guidance to the practitioner in choosing a systematic method for selecting sequences that adequately cover the capability of the system under test.

After examining numerous test criteria, it becomes evident that the most effective criterion is that which requires every state in the TIOA to be exercised. In practice this is impossible due to the infinite number of possible traces. However, the test criteria presented in the research, while not achieving all-states coverage, have already proved suitable for detecting a range of faults. Furthermore, development of incremental testing methods based on these criteria, directed at avoiding the inherent state explosion problem, continues.
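To illustrate the kind of coverage criterion involved, the following sketch is a deliberate simplification: it treats the specification as an untimed finite-state machine, ignoring the clock constraints that make TIOA testing hard, and all state and input names are hypothetical. It selects one shortest input sequence reaching each state, an all-states criterion in miniature:

```python
from collections import deque

# Hypothetical specification machine: state -> {input: (next_state, output)}.
MACHINE = {
    "idle":  {"req": ("busy", "ack")},
    "busy":  {"done": ("idle", "ok"), "fail": ("error", "nak")},
    "error": {"reset": ("idle", "ok")},
}

def state_covering_sequences(machine, start):
    """Shortest input sequence reaching each state, via breadth-first search.

    Executing each returned sequence against the implementation under test
    exercises every specification state at least once.
    """
    cover = {start: []}          # state -> input sequence that reaches it
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for inp, (nxt, _output) in machine.get(state, {}).items():
            if nxt not in cover:
                cover[nxt] = cover[state] + [inp]
                queue.append(nxt)
    return cover

print(state_covering_sequences(MACHINE, "idle"))
# -> {'idle': [], 'busy': ['req'], 'error': ['req', 'fail']}
```

The real TIOA problem is harder precisely because each transition also carries timing constraints, so a reachable state may only be reachable within certain clock valuations.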

Patterns and Transitions of Query Reformulation During Web Searching

Bernard J. Jansen, Mimi Zhang and Amanda Spink, in International Journal of Web Information Systems, vol. 3 (2007) issue 4, pp. 328-40

The purpose of this research is to analyse, using pattern-recognition tools, the predictability of the strategies employed by ordinary individuals to search the Internet, and then to apply the findings to the provision of targeted assistance suggesting reformulated alternative queries. Based on a web log analysis drawn from over four million transaction records for the search engine, which combines the results of Ask, Google, MSN and Yahoo, a subset of around 25 per cent was extracted by rejecting all discrete sessions with 100 or more queries. In this way it was hoped to exclude the bulk of queries generated by non-human agents, and thereby address the fundamental research question of how people modify their queries during interaction with the search engine.

The earliest work in this field considered search strategies in automated library catalogues, with results sorted into the two broad categories of moves: "operational", in which one adds or subtracts keywords to modify the results; and "conceptual", in which the choice of keywords alters. The present research builds on such underlying principles, equivalent to the familiar catalogue terminology of "broader term" (BT), "narrower term" (NT), "related term" (RT) and "synonym" (UF), which remain largely untouched but for the addition of content type (i.e. Web, video, audio, images, news).

The general findings showed that, in common with compatible earlier studies of other search engines, almost two-thirds of searches are new and just over one-third represent reformulations of the original query. That established, the research proceeded to analyse in detail the 126,901 identified instances of query reformulation. Here it was revealed that the largest proportion (28 per cent) of all amended queries were attempts to narrow the search results by adding terms, with marginally fewer (26 per cent) making use of the system-generated assistance (i.e. "Did you mean"), often occasioned by variant spellings or misspellings, dropping to 17 per cent for amendments to content type. Next in frequency came attempts to narrow (11 per cent) and to broaden (9 per cent) the search results by altering the core keywords, for example by using synonyms. The least common initial change to queries was generalisation (only 8 per cent) by subtracting keywords. Further analysis was conducted of all third-step searches, revealing a strong tendency to broaden the search (39 per cent) in order to compensate for overspecialisation at the previous step, and vice versa if the previous hits appeared too general. In effect, the authors conclude, the process of refining one's search is analogous to focusing a camera lens when taking a photograph, making smaller and smaller adjustments until the target is in focus.

By contrast with the variety of semantic changes outlined above, relatively little switching was noticed between categories for content type, with most of such changes being from web to images (37 per cent) or images to web (21 per cent).

Overall, the research demonstrated the existence of predictable patterns during web searching. Most adjustments are made to narrow or broaden the previous set of results, with the preferred tactic being, respectively, to add or take away keywords. Based on this knowledge, it should be possible to refine the algorithms presently used by search engines to include not only orthographic (i.e. "Did you mean?") but also semantic (e.g. "Might this be more accurate?") assistance.
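Such semantic assistance presupposes a mechanical classification of consecutive queries. A minimal sketch of the operational moves discussed in the summary above, assuming queries are compared simply as sets of whitespace-separated keywords (the function name and labels are illustrative, not the authors'):

```python
def classify_reformulation(prev_query, next_query):
    """Label the move between two consecutive queries in a session.

    Operational moves only: added keywords narrow the result set,
    dropped keywords broaden it; partial overlap suggests a
    conceptual move (e.g. a synonym swap); no overlap is a new query.
    """
    prev_terms = set(prev_query.lower().split())
    next_terms = set(next_query.lower().split())
    if prev_terms == next_terms:
        return "identical"
    if prev_terms < next_terms:
        return "specialisation"    # terms added -> narrower results
    if next_terms < prev_terms:
        return "generalisation"    # terms dropped -> broader results
    if prev_terms & next_terms:
        return "reformulation"     # some terms altered (conceptual move)
    return "new"                   # no overlap: a fresh query

print(classify_reformulation("jaguar", "jaguar car"))      # specialisation
print(classify_reformulation("jaguar car", "jaguar"))      # generalisation
print(classify_reformulation("jaguar car", "jaguar cat"))  # reformulation
```

A production system would of course need stemming, stop-word handling and detection of content-type switches, which this set-based comparison ignores.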

Developing Individual and Group Attributes for Effective Learning in e-Learning Communities

Ray Webster and Fay Sudweeks, in Journal of Systems and Information Technology, vol. 9 (2007) issue 2, pp. 143-54

Understanding how individuals and groups interact in virtual learning environments (VLEs), and developing ways to optimise that interaction, the authors argue, is key to harnessing the full potential of e-learning. VLEs are increasingly used to deliver course content not only in distance learning but across a densely populated university sector, in which face-to-face contact with tutors is at a premium and students are largely expected to be self-regulating and autonomous. At the same time, owing to the globalisation of knowledge-based occupations reliant on similar distributed virtual environments, businesses also have a stake in universities preparing graduates to be effective in such environments. Hence high priority should be accorded to understanding, monitoring and improving the processes involved.

Metacognition, the ability of students to reflect on what makes them effective in a given learning situation, is identified as the core competence for VLEs to work. Other key factors include the impact of group dynamics (size and cohesion) on communication, and likewise the relative benefits of emergent, as opposed to appointed, group leadership.

In terms of group cohesion, the need to foster a strong sense of community within the VLE is recognised as both vital and difficult. To ensure successful e-learning, the emphasis must be in particular on developing shared goals and trust between members. As regards group leadership, unlike in the real world, in which "talkativity" is flagged as most important, in virtual environments what is said is perceived as no less important than how frequently and how much a person speaks. In fact, research has shown that, in VLEs and virtual work environments, the leaders who emerge tend to exhibit the following traits: they participate in discussion early and often; they focus on communication quality as well as quantity; they demonstrate competence; and they help build cohesion.

The implementation phase of this research will examine those individual and group factors that lead to successful group e-learning and project work. The focus will be on key drivers in the innovation process required by the projects. If possible, the analysis will be extended to international students, giving the research a cross-cultural, global perspective. Because of the differences between them, different methodologies are required to study virtual, as opposed to face-to-face, learning. Complementary explorative data analysis (CEDA) and reflective and participatory approach to design (RAPAD) are two such methodologies.

CEDA consists of three iterative stages: qualitative induction, analysis, and refinement. From the initial stage, a fuzzy picture emerges which, through successive stages, facilitates progress towards the desired changes. The RAPAD methodology stimulates reflection within the context of the participatory process. This starts with each student reflecting individually on her or his personal cognitive profile, before moving on to the project group. Comparative discussions within the group lead in turn to the development of group cognitive profiles, all of which feeds into the development of a VLE design to support the learning of the group members. The key features of this process include that it:

  • encourages participating students to develop metacognitive and self-regulatory skills;

  • engages them in the achievement of a negotiated end product;

  • offers them a framework within which to work;

  • helps them to further their understanding of learning and creative work; and

  • provides, on successful completion, a resource to enable future collaboration.

Finally, while their primary aim has been to support educators in Australia to identify the dynamics of collaborative e-learning nationally, the authors see gains in our understanding of VLE processes as applying equally to the nurturing and management of innovation in a multicultural knowledge society.

An Approach to Sustainability for Information Systems

Craig Standing and Paul Jackson, in Journal of Systems and Information Technology, vol. 9 (2007) issue 2, pp. 167-76

The notion of sustainable development, the authors state, has been with us since 1987. Now, in many areas of business, the issue of corporate social responsibility has expanded to include not only the economic and social impacts of industry but also the environmental, all three factors increasingly appearing in companies' annual reports in the so-called "triple bottom line". And yet, in the two decades since, the impact of sustainability on the development and management of information systems (IS) appears to have been minimal. This is partly because, the recycling of old computer parts aside, the main thing an organisation wastes due to inefficient IS implementations is not carbon, on which we see tighter and tighter legislation, but time. And to wasted time can be added overproduction, rework, wasted motion, wasted intellect and inventory waste.

In terms of IS, sustainability has to do with the extent to which a system contributes to increased efficiency, helping to cut costs, and promotes innovation and growth in market share. To date businesses have often behaved as though sustainable efficiency gains are the inevitable result of the mere fact of introducing new IS. However, through structured discussions with managers engaged in part-time MBA study, the authors found numerous instances to the contrary, for example:

  1. An organisation implements a new system which automates a number of knowledge-based tasks and so is able to employ less qualified and therefore cheaper personnel. Two years later, due to a change in the market, the organisation lacks the trained and experienced staff to take timely advantage of a significant growth opportunity.

  2. A very large system is commissioned to change radically how a business is run. Despite extensive training in change management, insufficient consideration is accorded to software handover and maintenance. As a result, by the time full realisation of the scale of bugs and additional required features has dawned, the programmers involved in the original development of the system have long since moved on to new projects.

  3. In the run-up to the year 2000, a need arises within an organisation to rewrite modules for critical systems in different departments. Instead of coordinating their approach by defining common services, common data, etc., each unit addresses its problem independently. Outcome: poor communication in the design phase results in inflexibility and wasted time in operating the system post-implementation.

Having analysed numerous such examples, the authors have devised a set of seven IS principles:

  • awareness of sustainability and how it applies to work environments;

  • awareness that business efficiency and cost are not the sole criteria for IS decision-making;

  • stakeholder involvement to understand how systems interact and evolve;

  • optimisation of IT resources available to IS design;

  • consideration of the organisation's future IS needs;

  • recognition and avoidance of management and technology fads; and

  • inclusion of sustainability principles in IS governance plans.

These principles in turn correspond to established elements of decision-making in the IS life-cycle: strategic planning; infrastructure acquisition; business analysis and system specification; feasibility study; project planning; system design and development; and implementation and change management.

In recent years sustainability in IS has often been linked to the situation in developing countries. However, the potential negative impact on those organisations which habitually fail to take full account of the issue in the life-cycle of their systems is such, the authors conclude, that the time has come to transcend the narrow confines of research which serves the traditional business pursuit of cost savings and productivity gains, and instead to embrace the quest to achieve a model of sustainability-maturity in IS.

Standard Method Use in Contemporary IS Development: An Empirical Investigation

Laurie McLeod, Stephen MacDonell and Bill Doolin, in Journal of Systems and Information Technology, vol. 9 (2007) issue 1, pp. 6-29

This research analyses the results of a web-based survey of organisations with 200 or more full-time employees which had conducted IS projects in the preceding three years, in order to measure the extent to which standard methods of IS development either persist within New Zealand or have been superseded by more recent practices reported to be in use elsewhere. Of the 101 qualifying organisations that took part, 92 had made use of a standard method and, of these, 80 provided usable detailed responses. (N.B. By "standard method" is meant one, whether a pure or adapted commercial method or an in-house one, which is applied by an organisation relatively consistently to all IS development projects. No attempt was made to gauge, independently of the published findings of others, the extent to which non-NZ organisations deviate from the norms outlined below.)

In broad terms, the answers showed that: 51 per cent chose their standard methods due to organisational policy or familiarity; 15 per cent gave ease of use or quality of support as the reason for their choice; 14 per cent cited the developer's familiarity with the method. Perhaps surprisingly, only 16 per cent made their choice of method because its characteristics offered the best fit with those of the particular project. This last finding confirms that of research from the mid-1990s, which revealed that very often a given methodology was chosen solely on account of familiarity without regard to any relevant technical advantages (or disadvantages).

When questioned about the reasons for standard method use, respondents felt most strongly that it facilitated successful IS development, followed closely by ensuring that the system met user requirements. Project control and product quality also featured highly, as did issues pertaining to communication between developers and users. Least agreement was expressed regarding the method's positive impact on the project team's productivity. Though still minority views, the strongest negative factors were perceived to be that standard methods were cumbersome to learn, use and adapt, and tended to constrain creativity. As regards future IS development, most (70 per cent) expected to see more use of standard methods, as against a mere 1 per cent envisaging less, while those who anticipated the introduction of a wider range of methods accounted for only 9 per cent.

All in all, the results appeared fairly consistent with other empirical studies, which predict that standard method use is more likely to increase than the opposite. One interesting finding related to the growing preference among some respondents for project management methodologies in IS development, because these were seen as facilitating the inclusion of non-IS specialists in project teams. Other factors hinted at in the course of the analysis included that the high and increasing use of standard methods may be related to the growing level of offshore ownership of New Zealand businesses, with the consequent use of systems brought in from parent companies and the need for local customisation. This in turn suggests a move away from development management towards deployment management.

Although there is scope for further research to uncover the reasons for variations in the how and why of standard method use, it is clear that the corporate sector in New Zealand has yet to heed the various criticisms of standard method use found in the IS literature in recent years.

Computer-Based Learning Enhanced by Surprise: An Evolutionary Psychological Model

Ned Kock, in Journal of Systems and Information Technology, vol. 9 (2007) issue 1, pp. 30-45

Computer-based learning has existed for many years. There is now a wealth of courses enabling greatly enhanced flexibility in terms of when and where one can study. However, a general attitude persists that it offers no significant benefits over face-to-face communication between teacher and student. At the root of the argument against computer-mediated modes of instruction lies media richness theory, which considers the effects of ICT on human behaviour.

This research challenges the grounds for unfavourable attitudes towards increased use of computer-based learning, on the basis of an analysis of the "flashbulb memorization" phenomenon from the perspective of evolutionary psychology. An example of a flashbulb memory would be the ability to recall, often in detail, what one was doing, what the weather was like, etc., at the moment one learned of a surprising event, for example that President Kennedy had been shot, that Princess Diana had died or, in the case of one study in Japan, that a nuclear accident had occurred in 1999. Two main explanations of the phenomenon are recognised: the rehearsal interpretation and the Darwinian interpretation.

The rehearsal interpretation is based on the assumption that the reason why some people retain memories of details long after a surprising event is because they have gone over it time and again in their minds. For proponents of this interpretation, the fact that there is little observable difference in emotional reaction and self-reported surprise between those with, and those without, vivid and accurate recall supports a conclusion that the significant factor was the frequent remembering of the event.

The Darwinian explanation rests instead on the notion of an inherited behavioural trait in human beings, acquired when our ancestors lived in the environment of evolutionary adaptation, in other words "in the wild". Clearly, for a hunter-gatherer, an ability to predict when and where one's life might be in danger from predators could prove vital; and this ability depends on recognising similarities between where one is and a previous life-threatening situation, for example a surprise attack by a woolly mammoth or a poisonous snake. The underlying premise is that being surprised sets off an instinctive process of preferential memorisation. This in turn enhances self-preservation, helping one either to avoid, or to be more cautious in, similar circumstances in the future.

Based on this Darwinian premise, the article suggests that, by introducing surprises, such as video clips of an aggressive animal, at the end of selected modules during a period of study, one can exploit an evolutionary predisposition in humans. Provoking a feeling of sudden threat will prompt the brain to memorise more vividly and accurately for future recall what led up to that moment.

The next stage of this research is to move from the hypothesis to conducting tests on human subjects. Results of the test will show whether or not computer-based learning enhanced by surprise (CLEBS) provides a more effective model. If so, it should also be possible to measure to what extent any positive benefits can be felt over time, for example: Is one's ability to recall what has been learned improved after as well as before the surprise? And how long before or after the surprise does the enhanced memorisation effect last? 5, 10, 15 min? Are any learning benefits reduced by overexposure to repeated instances of the same video-clip surprise?

Roderic Vassie, Head of Publishing at Microform Academic Publishers, Wakefield, UK. He was formerly curator at the British Library, selection officer at the Library of Congress and bibliographic coordinator at the UAE University.
