Search results
1 – 10 of over 41,000
Megan Miller and Volker Hegelheimer
Abstract
Despite their motivational appeal to learners, innovative and technologically advanced computer simulation games targeting native English speakers frequently remain beyond the competence of ESL learners as independent didactic tools. Guided by Chapelle’s (2001) criteria for determining CALL task appropriateness, this paper illustrates how the popular authentic simulation, The SIMs, can be adapted to enhance vocabulary learning through supporting materials. Adult ESL learners completed a five‐week unit, experiencing different conditions of supplemental materials while completing tasks using The SIMs. The participants received mandatory supplemental materials in one condition, voluntary access to supplemental materials in the second, and no supplemental materials in the third. The results indicate a statistically significant increase in vocabulary acquisition for the first condition. Student feedback suggests the supplemental materials were beneficial for successful task completion. Thus, how authentic computer simulation tasks are structured and supported appears to have a considerable bearing on the appropriateness of the task.
Abstract
The goal of a good computer interface is to provide a natural language help facility that allows new users to learn about the computer, its operating system in particular, and the important packages available for their use. The UNIX Consultant (UC) is an intelligent natural language interface designed to allow naive users to communicate with the UNIX operating system (of AT&T Bell Laboratories) in ordinary English in as painless a way as possible. UC allows the user to engage in natural language dialogues with the operating system: users can query UC about how to do things in UNIX, ask about command names and formats, receive on‐line definitions of UNIX terms, and get help debugging problems with UNIX commands.
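As a purely illustrative aside (this sketch is an assumption, not UC's actual architecture, which rests on much richer plan- and goal-based knowledge representation), the surface behaviour of such a help facility can be imagined as a mapping from task phrases in a query to UNIX advice:

```python
# Toy sketch of a natural-language UNIX help lookup. The phrase table and
# the consult() function are hypothetical; a real system such as UC parses
# and reasons about the query rather than matching fixed substrings.
HELP_PATTERNS = {
    "delete a file": "rm <file>      -- remove (delete) a file",
    "list files": "ls             -- list directory contents",
    "rename a file": "mv <old> <new> -- move (rename) a file",
}

def consult(query):
    # Match the query against known task phrases and return UNIX advice.
    q = query.lower()
    for phrase, advice in HELP_PATTERNS.items():
        if phrase in q:
            return advice
    return "Sorry, I don't know how to help with that yet."
```

For example, `consult("How do I delete a file?")` would return the `rm` advice, while an unrecognized request falls through to the apology — a crude stand-in for the graceful dialogue the abstract describes.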
Abstract
Describes intuitively how the four types of formal languages can be generated by four types of grammars or recognized by four types of automata, and gives the relationships between context‐sensitive languages and computer programming languages. Defines and investigates parallel productions, parallel grammars, and context‐free parallel grammars. Shows that there exist context‐sensitive languages which can be generated by context‐free parallel grammars, and states the advantages of context‐free parallel grammars. Also shows that the context‐free languages (CFL) are a proper subset of the context‐free parallel languages (CFPL). CFPL is thus a more effective tool than CFL for modelling computer programming languages, especially parallel programming languages such as the Ada programming language. Also illustrates the context‐sensitive nature of recognizing handwritten characters. The results may have useful applications in artificial intelligence, the modelling of parallel computer programming languages, software engineering, expert systems and robotics.
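The key idea behind parallel grammars — not taken from this article, but a standard illustration from Lindenmayer (L‐) systems — is that every symbol of the current word is rewritten simultaneously at each step. Even with purely context‐free productions, this parallel application can generate a language that is context‐sensitive but not context‐free, which is exactly the separation the abstract claims:

```python
# Minimal sketch of a context-free parallel grammar: each production is
# context-free (one symbol on the left), but all symbols are rewritten
# at once, as in a Lindenmayer system.
rules = {"a": "aa", "b": "bb", "c": "cc"}

def parallel_step(word):
    # Apply a production to every symbol simultaneously.
    return "".join(rules.get(sym, sym) for sym in word)

word = "abc"  # axiom
for _ in range(3):
    word = parallel_step(word)
# After n steps the word is a^(2^n) b^(2^n) c^(2^n). The language
# { a^(2^n) b^(2^n) c^(2^n) : n >= 0 } is context-sensitive but not
# context-free (by the pumping lemma), so parallel application of
# context-free rules exceeds the power of ordinary CFLs.
```

This is only a sketch of the mechanism; the article's own constructions and proofs are, of course, more general.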
Abstract
Speculations on the possibility of computers displaying intelligence are usually traced to Turing's 1950 paper, ‘Computing machinery and intelligence’. Claims for the literal intelligence of an appropriately programmed computer were publicly refuted by Searle in 1980. Optimism about the adequate simulation of intelligence is now further diminished. Analogies between the computer and the brain or mind have persisted. A contrasting perspective which links computers with documents through writing and through the faculty for constructing socially shared systems of signs has also been developed. From this perspective it can be shown that (i) claims for the literal intelligence of a computer rest on a similar basis to claims for the intelligence of a document, the production of depersonalised linguistic output, and (ii) that such claims are subject to an identical objection, that linguistic output is made available without a prior act of comprehension by the artefact. This paper places the Turing test in its intellectual and historical context. A claim that written words can give the appearance of intelligence, without the human capacity for dialectic response, is found in Plato's Phaedrus. This, too, must be placed in its historical context of a transition from predominantly oral to oral and written communication. Demonstrating that there are extensive similarities between the claims of computers and documents to literal intelligence is part of a progressive demystification of the computer.
Abstract
Proposes that, while communicating directly with computers in ordinary language is not unfeasible in principle, it is highly unlikely ever to be achieved in practice. As Heidegger pointed out, language is not normally used for the exchange of information but calls to attention some aspect of the world the language users already share. Without this experience of the world, computers are unable to place language in context. Moreover, humans continue to develop and experience, so the context for language is ever changing. This is the challenge for artificial intelligence researchers. Also discusses the practical consequences of this fundamental problem, current solutions, and their social implications and dangers.
Abstract
Purpose
The purpose of this paper is to provide guidance to librarians about whether to keep or withdraw books on pre-Internet computer programming languages.
Design/methodology/approach
For each of the programming languages considered, this article provides historical background and an assessment of current academic library collection needs.
Findings
Many older languages (COBOL, FORTRAN, C, Lisp, Prolog, and Ada) are still in use and need reliable sources available for reference. Additionally, books about obsolete languages have educational value due to their influence on the development of newer languages such as C++ and Java.
Practical implications
This information will be useful to academic librarians who want to make the best choices about keeping or withdrawing computer programming books.
Originality/value
Most librarians responsible for managing computer science collections do not have a computer programming background, so they do not know which older languages are still important.
Abstract
A review is made of research in the design and use of languages for computer programming, with emphasis on one of its specialized applications, information retrieval, and on the end‐user directly interacting with the computer. It is shown that we lack criteria for determining the best fit of a language to a particular user, although some research has suggested a means by which this could be done: offering users a wide range of language choice and using the computer as a means of helping them learn and adapt to the language.
Abstract
This article is a contribution to the development of a comprehensive interdisciplinary theory of LIS in the hope of giving a more precise evaluation of its current problems. The article describes an interdisciplinary framework for LIS, especially information retrieval (IR), in a way that goes beyond the cognitivist ‘information processing paradigm’. The main problem of this paradigm is that its concept of information and language does not deal in a systematic way with how social and cultural dynamics set the contexts that determine the meaning of those signs and words that are the basic tools for the organisation and retrieval of documents in LIS. The paradigm does not distinguish clearly enough between how the computer manipulates signs and how librarians work with meaning in practice when they design and run document mediating systems. The ‘cognitive viewpoint’ of Ingwersen and Belkin makes clear that information is not objective, but rather only potential, until it is interpreted by an individual mind with its own internal mental world view and purposes. It facilitates further study of the social pragmatic conditions for the interpretation of concepts. This approach is not yet fully developed. The domain analytic paradigm of Hjørland and Albrechtsen is a conceptual realisation of an important aspect of this area. In the present paper we make a further development of a non‐reductionistic and interdisciplinary view of information and human social communication by texts in the light of second‐order cybernetics, where information is seen as ‘a difference which makes a difference’ for a living autopoietic (self‐organised, self‐creating) system. Other key ideas are from the semiotics of Peirce and also Warner: the understanding of signs as a triadic relation between an object, a representation and an interpretant. Information is the interpretation of signs by living, feeling, self‐organising, biological, psychological and social systems.
Signification is created and controlled in a cybernetic way within social systems and is communicated through what Luhmann calls generalised media, such as science and art. The modern socio‐linguistic concept of ‘discourse communities’ and Wittgenstein's ‘language game’ concept give a further pragmatic description of the self‐organising system's dynamic that determines the meaning of words in a social context. As Blair and Liebenau and Backhouse point out in their work, it is these semantic fields of signification that are the true pragmatic tools of knowledge organisation and document retrieval. Methodologically, they are the first systems to be analysed when designing document mediating systems, as they set the context for the meaning of concepts. Several practical and analytical methods from linguistics and the sociology of knowledge can be used in combination with standard methodology to reveal the significant language games behind document mediation.
Abstract
The flexibility of a robot system comes from its ability to be programmed. How the robot is programmed is a main concern of all robot users; a good mechanical arm can be underutilized if it is too difficult to program. The introduction of the Universal Robot Controller™ (URC) has made the possibility of a standard, easy‐to‐use robot programming language a reality. The URC is an open‐architecture, PC‐based robot controller. It will work with virtually any robot and gives the user increased flexibility and capabilities over the standard OEM controllers. The URC uses Windows NT as its operating system and is the ideal platform for a universal robot programming language, RobotScript, which allows one robot language to run all robots in a factory.
Abstract
Some research has indicated that interacting with a computer through a natural language is not easier, more effective, or even preferred by new users of a system, when compared with menu and command interfaces. However, command‐driven searches would be facilitated, as users move from system to system, if there existed a common command language. Proposed standards for a “Common Command Language for Online, Interactive Information Retrieval” have been developed by NISO Committee G, and are now being reviewed for adoption.