Search results

1–10 of over 11,000
Article
Publication date: 8 May 2018

Nabila Ahmed Khodeir, Hanan Elazhary and Nayer Wanas

Abstract

Purpose

The purpose of this paper is to present an algorithm to generate story problems via controlled parameters in the domain of mathematics. The generation process is performed in the problem generation module of an intelligent tutoring system suggested in this paper. Controlling the question parameters allows the generated questions to be adapted to specific student needs. Story problems are selected since they are one of the most important types of problems in mathematics, helping train students to apply their knowledge to real-world problems. Such problems target improving a range of student skills, including literacy (reading the problem), recognizing the embedded mathematical information, and applying the required arithmetic operators.

Design/methodology/approach

Natural language generation (NLG) techniques are used to control the difficulty level of the generated story problem header and to introduce variation at the natural-language level. The proposed NLG technique is based on separate knowledge categories, which provide flexibility in the generation process and allow the module to be ported to other contexts, domains, and natural languages without a complete redesign.
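As an illustration of parameter-controlled story problem generation, the sketch below fills slot-based templates whose operand sizes are driven by a difficulty parameter. It is a minimal stand-in, not the authors' rhetorical-schema system: the template text, slot names, and difficulty rule are all invented here.

```python
import random

# Hypothetical schemas: each template names its slots and the arithmetic
# operation the solver must recognize. The paper's rhetorical schemas are
# far richer; this only illustrates parameter-controlled generation.
TEMPLATES = {
    "addition": "{name} has {a} {item}. {friend} gives {name} {b} more {item}. "
                "How many {item} does {name} have now?",
    "subtraction": "{name} has {a} {item}. {name} gives {friend} {b} {item}. "
                   "How many {item} does {name} have left?",
}

def generate_problem(operation, difficulty, rng=random):
    """Generate a story problem; difficulty controls the size of the operands."""
    hi = 10 ** difficulty              # difficulty 1 -> operands < 10, 2 -> < 100, ...
    a = rng.randint(2, hi)
    b = rng.randint(1, a)              # keep subtraction results non-negative
    text = TEMPLATES[operation].format(
        name="Sara", friend="Omar", item="apples", a=a, b=b)
    answer = a + b if operation == "addition" else a - b
    return text, answer

text, answer = generate_problem("subtraction", difficulty=1)
print(text)
print("answer:", answer)
```

Varying the template set, the slot fillers, and the difficulty parameter independently is what gives this style of generator its adaptivity.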

Findings

The approach has been empirically evaluated, and the results show that the generated problems are sound, clear, and naturally readable. This is in addition to the usability of the tutoring system itself.

Research limitations/implications

The generation technique is confined to problems described using rhetorical schemas. Nevertheless, it can generate any problem for which a rhetorical schema is available.

Originality/value

Most story problems generation systems limit the variation of the story problems to formulating the sentences that describe the story problem and the associated mathematical operations. In contrast, this paper presents a story problems generation technique that allows variations in the structure of the narrative story as well as the context, sentences, wordings, and mathematical operations. This variability allows assessing different student skills along different dimensions with gradually increasing difficulty levels.

Details

The International Journal of Information and Learning Technology, vol. 35 no. 3
Type: Research Article
ISSN: 2056-4880

Article
Publication date: 4 October 2019

Preeti Mulay, Sangeeta Paliwal, Venkatesh Iyengar, Samaya Pillai and Ashwini Rao

Abstract

Purpose

Advancements in open source, free integrated library management systems (LMS) for cataloging, circulation, flexible reporting and automated library services have gained great importance, especially in academic communities. The purpose of this study is to provide a solution to a distinct problem: the automatic generation of multiple copies for unique titles, leading to title mismatch and duplication in the biblio-records of a university's book collection. The aim of this paper is to show how to generate the unique titles report in any large university library using KOHA, without loss of accession history or empirical data. The paper also demonstrates a smooth transition from another library software package to KOHA.

Design/methodology/approach

The case university is a large entity with a huge collection of reading material and multiple affiliated institutes. The study demonstrates a step-by-step trial-and-error method involving several iterations: detecting the root cause, implementing corrective actions and finally resolving the problem of data redundancy and duplication of records. Currently, KOHA's user manual does not provide a solution to this problem. The authors believe this paper will help practitioners of KOHA-LMS understand and appreciate the quality of the library records being managed in delivering quality services to all users and stakeholders. The methodology combines KOHA's open access platform and the existing LMS for generating the unique titles report; Microsoft Excel's pivot-table approach, the Libsuite software, SQL queries against KOHA databases and a cloud-based system platform are used to successfully produce the unique titles report of print books in the university library.

Findings

This paper explains how to generate a complete and correct unique titles report for all print books of the university. It also identifies the preventive measures required to keep titles unique when new books arrive or new institutes are added under the university.

Research limitations/implications

The focus of the work discussed here is limited to generating a correct report of unique titles using KOHA for the print books of a university with multiple affiliated institutes.

Practical implications

This paper gives a constructive solution for generating the unique titles report using KOHA that is practically useful for any university or institute wishing to adopt KOHA, one of the open source software packages used worldwide for libraries.

Originality/value

This paper fulfills an identified need: a study of how to generate a unique titles report for the print books of a university library. To the best of the authors' knowledge, no such case study exists in the available literature, particularly one focusing on the multiple-copies data redundancy problem of KOHA-LMS.

Details

Library Hi Tech News, vol. 36 no. 8
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 1 December 1999

Ruth Aylett, Gary Petley, P.W.H. Chung, James Soutter and Andrew Rushton

Abstract

Operating procedure synthesis (OPS) has been used to generate operating procedures for chemical plants. However, the application of AI planning to this domain has rarely been considered, and when it has, the scope of the system used has limited it to solving “toy” problems. This paper describes the application of state‐of‐the‐art AI planning techniques to the generation of operating procedures for a chemical plant as part of the INT‐OP project at the Universities of Salford and Loughborough. The CEP planner is outlined and its application to a double effect evaporator test rig is discussed in detail. Particular attention is paid to the issues involved in domain modelling: describing the domain, developing AI planning operators, defining safety restrictions, and defining the problem. The paper then presents the results, lessons learned and problems still remaining.
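The flavour of AI planning applied to procedure generation can be sketched with a toy STRIPS-style forward search. The operators below are invented for illustration and bear no relation to the CEP planner's actual model of the evaporator rig.

```python
from collections import deque

# Toy STRIPS-style operators for a fragment of plant start-up: each has
# preconditions, an add list and a delete list. A real OPS domain model
# also encodes safety restrictions, which are omitted here.
OPERATORS = {
    "open_feed_valve": {"pre": set(),         "add": {"feed_open"}, "rem": set()},
    "start_pump":      {"pre": {"feed_open"}, "add": {"pump_on"},   "rem": set()},
    "start_heating":   {"pre": {"pump_on"},   "add": {"heating"},   "rem": set()},
}

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest operator sequence."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, op in OPERATORS.items():
            if op["pre"] <= state:
                nxt = frozenset((state - op["rem"]) | op["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # goal unreachable

print(plan(set(), {"heating"}))
# -> ['open_feed_valve', 'start_pump', 'start_heating']
```

The returned operator sequence is exactly an operating procedure: an ordered list of plant actions that transforms the initial state into one satisfying the goal.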

Details

Integrated Manufacturing Systems, vol. 10 no. 6
Type: Research Article
ISSN: 0957-6061

Article
Publication date: 15 March 2022

Ranjitha K., Sivakumar P. and Monica M.

Abstract

Purpose

This study aims to implement an improved version of the Chimp algorithm (IChimp) for load frequency control (LFC) of a power system.

Design/methodology/approach

IChimp was adopted in this work to optimize the proportional integral derivative (PID) controller parameters used for the LFC of a two-area interconnected thermal system.
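A minimal sketch of metaheuristic PID tuning is shown below. It substitutes plain random search for the IChimp update rules and a first-order plant for the two-area thermal system, so every modelling detail here is an assumption made for illustration.

```python
import random

def step_response_cost(kp, ki, kd, steps=400, dt=0.02):
    """Integral of squared error for a unit step on a first-order plant.

    The plant dy/dt = (u - y) / tau is a toy stand-in for the paper's
    two-area interconnected thermal system model.
    """
    tau, y, integral, prev_err, cost = 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        y += dt * (u - y) / tau                     # Euler step of the plant
        cost += err * err * dt
        prev_err = err
    return cost

def tune_pid(iterations=300, rng=random):
    """Random search over the gain space; a placeholder for the IChimp
    update rules, which this sketch does not attempt to reproduce."""
    best_gains, best_cost = None, float("inf")
    for _ in range(iterations):
        gains = (rng.uniform(0.1, 5.0),   # kp
                 rng.uniform(0.0, 2.0),   # ki
                 rng.uniform(0.0, 0.5))   # kd
        cost = step_response_cost(*gains)
        if cost < best_cost:
            best_gains, best_cost = gains, cost
    return best_gains, best_cost
```

Swapping the random sampler for a population-based update rule (Chimp, IChimp, or any other metaheuristic) changes only `tune_pid`; the cost function and plant model stay fixed, which is what makes such comparisons between optimizers meaningful.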

Findings

The superiority of the proposed IChimp-tuned PID controller over Chimp optimization, a direct synthesis-based PID controller, an internal model control-tuned PID controller and recent algorithm-based PID controllers was demonstrated.

Originality/value

IChimp has good convergence and better search ability. The proposed IChimp-optimized PID controller ensured better performance in terms of convergence behaviour, optimized controller gains and steady-state response.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 5
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 August 2004

San‐Yih Hwang and Shi‐Min Chuang

Abstract

In a large‐scale digital library, it is essential to recommend a small number of useful and related articles to users. In this paper, a literature recommendation framework for digital libraries is proposed that dynamically provides recommendations to an active user when browsing a new article. This framework extends our previous work, which considered only Web usage data, by utilizing content information of articles when making recommendations. Methods that use pure content data, pure Web usage data, and both content and usage data are developed and compared using data collected from our university's electronic thesis and dissertation (ETD) system. The experimental results demonstrate that content data and usage data complement each other, and hybrid methods that take both types of information into account tend to achieve more accurate recommendations.
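The hybrid idea, combining content similarity with usage co-occurrence, can be sketched as a weighted sum of the two scores. The bag-of-words cosine representation, the session model, and the weighting parameter below are illustrative assumptions, not the authors' exact method.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(active, articles, sessions, alpha=0.5, k=2):
    """Hybrid score: alpha * content similarity + (1 - alpha) * usage
    co-occurrence, where usage is the fraction of sessions containing the
    active article that also contain the candidate."""
    with_active = [s for s in sessions if active in s]
    scores = {}
    for cid, text in articles.items():
        if cid == active:
            continue
        content = cosine(Counter(articles[active].lower().split()),
                         Counter(text.lower().split()))
        usage = (sum(1 for s in with_active if cid in s) / len(with_active)
                 if with_active else 0.0)
        scores[cid] = alpha * content + (1 - alpha) * usage
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With `alpha = 1` this degenerates to a pure content method and with `alpha = 0` to a pure usage method, which is precisely the comparison the paper's experiments set up.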

Details

Online Information Review, vol. 28 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 1 September 2001

Nick Poole

Abstract

The purpose of this article is to examine the existing tools and guidance available to museums, archives and libraries, and then to consider new technologies such as accessible Portable Document Format files and additional modules for existing web development software. The article reviews current tools, standards and guidelines in accessibility such as WAI, the RNIB Digital Access Campaign, the Information Age Government Champions guidelines, the Bobby validator, Access Adobe and the Macromedia Dreamweaver Accessibility Extension. Two case studies concerning accessibility are included.

Details

VINE, vol. 31 no. 3
Type: Research Article
ISSN: 0305-5728

Article
Publication date: 1 December 2001

Jaroslav Mackerle

Abstract

Gives a bibliographical review of the finite element meshing and remeshing from the theoretical as well as practical points of view. Topics such as adaptive techniques for meshing and remeshing, parallel processing in the finite element modelling, etc. are also included. The bibliography at the end of this paper contains 1,727 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1990 and 2001.

Details

Engineering Computations, vol. 18 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 August 2009

Manuel Wimmer

Abstract

Purpose

The definition of modeling languages is a key prerequisite for model‐driven engineering. In this respect, Domain‐Specific Modeling Languages (DSMLs) defined from scratch in terms of metamodels and the extension of the Unified Modeling Language (UML) by profiles are the proposed options. For interoperability reasons, however, the need arises to bridge modeling languages originally defined as DSMLs to UML. Therefore, the paper aims to propose a semi‐automatic approach for bridging DSMLs and UML by employing model‐driven techniques.

Design/methodology/approach

The paper discusses problems of the ad hoc integration of DSMLs and UML and from this discussion a systematic and semi‐automatic integration approach consisting of two phases is derived. In the first phase, the correspondences between the modeling concepts of the DSML and UML are defined manually. In the second phase, these correspondences are used for automatically producing UML profiles to represent the domain‐specific modeling concepts in UML and model transformations for transforming DSML models to UML models and vice versa. The paper presents the ideas within a case study for bridging ComputerAssociate's DSML of the AllFusion Gen CASE tool with IBM's Rational Software Modeler for UML.
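The two-phase approach can be sketched as a correspondence table (defined manually in phase one) driving an automatic transformation (phase two). The metaclass names, stereotypes, and model representation below are invented for illustration and are not the paper's actual mapping for AllFusion Gen.

```python
# Hypothetical phase-one output: each DSML concept corresponds to a UML
# metaclass plus a stereotype that preserves the domain-specific meaning.
CORRESPONDENCES = {
    "Entity":    ("Class",       "entity"),
    "Attribute": ("Property",    "attribute"),
    "Relation":  ("Association", "relation"),
}

def to_uml(dsml_model):
    """Phase two: derive a stereotyped UML model automatically.

    Because each DSML element keeps its name and gains a stereotype naming
    its original concept, the transformation is reversible without loss of
    information, which is the property the paper's bridge aims for.
    """
    uml = []
    for element in dsml_model:
        metaclass, stereotype = CORRESPONDENCES[element["type"]]
        uml.append({"metaclass": metaclass,
                    "stereotype": stereotype,
                    "name": element["name"]})
    return uml

print(to_uml([{"type": "Entity", "name": "Customer"}]))
```

The point of the table is that the integrator only edits `CORRESPONDENCES`; the profile generation and the model transformation in both directions follow mechanically.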

Findings

The ad hoc definition of UML profiles and model transformations for achieving interoperability is typically a tedious and error‐prone task. By employing a semi‐automatic approach one gains several advantages. First, the integrator only has to deal with the correspondences between the DSML and UML on a conceptual level. Second, all repetitive integration tasks are automated by using model transformations. Third, well‐defined guidelines support the systematic and comprehensible integration.

Research limitations/implications

The paper focuses on the integrating direction DSMLs to UML, but not on how to derive a DSML defined in terms of a metamodel from a UML profile.

Originality/value

Although DSMLs defined as metamodels and UML profiles are frequently applied in practice, only a few attempts have been made to provide interoperability between these two worlds. The contribution of this paper is to integrate the so far competing worlds of DSMLs and UML by proposing a semi‐automatic approach that allows exchanging models between the two worlds without loss of information.

Details

International Journal of Web Information Systems, vol. 5 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 June 2003

Jaroslav Mackerle

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 14 October 2013

Preben Hansen, Anni Järvelin and Antti Järvelin

Abstract

Purpose

This study aims to examine manually formulated queries and automatic query generation in an early phase of a patent “prior art” search.

Design/methodology/approach

The study was performed partly within a patent domain setting, involving three professional patent examiners, and partly in the context of the CLEF 2009 Intellectual Property (CLEF-IP) track. For the exploratory study of user-based query formulation, three patent examiners performed the same three simulated real-life patent tasks. For the automatic query generation, a simple term-weighting algorithm based on the RATF formula was used. The manually and automatically created queries were compared to analyse what kinds of keywords were selected and from which parts of the patent documents they were taken.
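The automatic side can be sketched as follows: weight each term of the patent document by the ratio of its collection frequency to its document frequency, then take the top-weighted terms as the initial query. This is only the spirit of RATF; the actual formula adds logarithmic scaling with collection-dependent tuning parameters, and the statistics below are invented.

```python
# Hypothetical collection statistics: cf = total occurrences of a term in
# the collection, df = number of documents containing it. cf/df is the
# term's average within-document frequency, which favours terms that are
# frequent but concentrated in few documents.
def ratf_like(term, cf, df):
    """Simplified RATF-style weight (the real formula also scales by df)."""
    return cf.get(term, 1) / df.get(term, 1)

def generate_query(patent_text, cf, df, k=3):
    """Pick the k highest-weighted terms of a patent document as a query."""
    terms = set(patent_text.lower().split())
    return sorted(terms, key=lambda t: ratf_like(t, cf, df), reverse=True)[:k]
```

A generator of this shape produces an initial query automatically from the document alone, which is exactly the tool the paper's findings suggest is feasible, even though term frequencies by themselves do not fully match the examiners' relevance judgements.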

Findings

For user-formulated queries, it was found that patent documents were read in a specific order of importance and that reading times varied. Annotations and collaboration occurred while reading and selecting/ranking terms, and ranking terms was experienced as harder than selecting them. For the automatically generated queries, it was found that the term frequencies used in the RATF formula alone do not closely approximate which terms users judge to be relevant query terms. At the same time, the results suggest that developing a query generation tool for generating initial queries based on patent documents is feasible.

Research limitations/implications

These preliminary but informative results need to be viewed in the light that only three patent experts were observed and that a small set of topics was used.

Originality/value

It is usually difficult to get access to the setting of the patent domain, and the results of the study show that the methodology provided a feasible way to study the manual and automatic query formulation of the patent engineer.
