Search results
21 – 30 of over 9000
Abstract
In these heady days of spreadsheet programs, word processors, and database management software, we are confronted with a simple problem of software specialization: how to get that nice 10‐column chart of next year's book budget from the Lotus 1–2–3 program into the budget report being written on Microsoft Word?
Ozlem Gemici Gunes and A. Sima Uyar
Abstract
Purpose
The purpose of this paper is to propose parallelization of a successful sequential ant‐based clustering algorithm (SABCA) to increase time performance.
Design/methodology/approach
A SABCA is parallelized using the MPI parallelization library. Parallelization is performed in two stages. In the first stage, the data to be clustered are divided among the processors. After the sequential ant-based approach running on each processor clusters the data assigned to it, the resulting clusters are merged in the second stage. The merging is also performed with the same ant-based technique. The experimental analysis focuses on whether the implemented parallel ant-based clustering method achieves better time performance than its fully sequential version. Since the aim of this paper is to speed up the time-consuming, but otherwise successful, ant-based clustering method, no extra steps are taken to improve the clustering solution. Tests are executed using 2 and 4 processors on selected sample datasets. Results are analyzed through commonly used cluster validity indices and parallelization performance metrics.
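The two-stage scheme described above can be sketched as follows. This is a minimal illustration, not the authors' code: a naive 1-D k-means stands in for the ant-based clustering step, and plain list partitioning stands in for the MPI data distribution; all function names are hypothetical.

```python
import random

def toy_cluster(points, k=2, iters=5):
    # Stand-in for the ant-based clustering step (here: naive 1-D k-means).
    centroids = random.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            groups[nearest].append(p)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return [g for g in groups if g]

def two_stage_clustering(data, n_procs=4):
    # Stage 1: partition the data among the "processors"; in the paper each
    # partition is clustered independently on its own MPI rank.
    partitions = [data[i::n_procs] for i in range(n_procs)]
    local = [c for part in partitions for c in toy_cluster(part)]
    # Stage 2: merge by clustering the representatives (centroids) of the
    # local clusters, then pooling the members of merged representatives.
    # Only the representatives need to be communicated, which keeps the
    # communication cost low.
    reps = [sum(c) / len(c) for c in local]
    merged = []
    for group in toy_cluster(reps, k=2):
        members = []
        for r in group:
            members.extend(local[reps.index(r)])
        merged.append(members)
    return merged

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(40)]
        + [random.gauss(10, 1) for _ in range(40)])
clusters = two_stage_clustering(data)
```

Only cluster representatives cross the stage boundary, which mirrors why the communication cost of the proposed scheme stays small.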
Findings
As a result of the experiments, the proposed algorithm performs better based on time measurements and parallelization performance metrics; as expected, it does not improve clustering quality based on the cluster validity indices. Furthermore, the communication cost is very small compared to other ant-based clustering parallelization techniques proposed so far.
Research limitations/implications
The use of MPI for the parallelization step has been very effective. Also, the proposed parallelization technique is quite successful in increasing time performance; however, as a future study, improvements to clustering quality can be made in the final step where the partially clustered data are merged.
Practical implications
The results in the literature show that ant-based clustering techniques are successful; however, their high time complexity prohibits their effective use in practical applications. Through this low-communication-cost parallelization technique, this limitation may be overcome.
Originality/value
A new parallelization approach to ant-based clustering is proposed. The proposed approach increases time performance without decreasing clustering performance. Another major contribution of this paper is that the communication cost required for parallelization is lower than that of previously proposed parallel ant-based techniques.
Vincent Flifli, Peter Adebola Okuneye and Dare Akerele
Abstract
Purpose
The purpose of this paper is to study an innovative rice value chain financing system (VCFS) established in Benin, to identify the determinants of producers' and processors' access to formal credit, both at the intensive and extensive margins. It focuses on multi-stakeholder platforms (MSP) which connect producers and processors in need of credit to potential financial lenders.
Design/methodology/approach
The empirical analysis uses rich cross-sectional survey data collected in Northern Benin in 2018. The sample consists of 215 rice producers and 217 rice processors randomly selected through a multi-stage sampling and interviewed with structured questionnaires. The empirical models analyze the determinants of the likelihood to receive a credit and the amount of credit received. To account for the sample selection and censored nature of the main outcome variable, the study considers a Heckman two-stage model coupled with a Tobit model for robustness checks.
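The Heckman two-step idea used above can be illustrated on synthetic data. This is not the authors' estimation code; it is a minimal numpy/scipy sketch in which a probit selection equation (whether credit is received) is estimated first, and its inverse Mills ratio then corrects the second-stage regression of the credit amount. All variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 2000
# Synthetic setup: z drives selection (receiving credit at all), x drives
# the amount received; the errors are correlated, which is the source of
# the selection bias the two-step procedure corrects.
x = rng.normal(size=n)
z = rng.normal(size=n)
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
selected = (0.5 + 1.0 * z + u[:, 0]) > 0      # stage 1: credit approved?
amount = 2.0 + 1.5 * x + u[:, 1]              # stage 2: observed if selected

# Step 1: probit of selection on z, via a crude maximum-likelihood fit.
Z = np.column_stack([np.ones(n), z])

def neg_loglik(b):
    p = np.clip(norm.cdf(Z @ b), 1e-10, 1 - 1e-10)
    return -(selected * np.log(p) + (~selected) * np.log(1 - p)).sum()

probit = minimize(neg_loglik, np.zeros(2)).x

# Step 2: OLS of the amount on x plus the inverse Mills ratio (IMR),
# on the selected subsample; the IMR term absorbs the selection effect.
xb = Z @ probit
imr = norm.pdf(xb) / norm.cdf(xb)
X = np.column_stack([np.ones(n), x, imr])[selected]
beta, *_ = np.linalg.lstsq(X, amount[selected], rcond=None)
```

With the correction, the second-stage coefficients recover the true parameters of the amount equation despite the non-random subsample.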
Findings
The study finds that the MSP are effective in increasing access to formal credit and the amount borrowed. Producers and processors who are members of the MSP are more likely to receive credit and, conditional on being approved for credit, to receive a larger amount. Other key factors that significantly explain access to credit include the use of a soft guarantee for securing a loan, the degree of participation in the platform and demographic characteristics. These findings are consistent across the Heckman and Tobit models.
Research limitations/implications
The study attempts to rigorously analyze the factors explaining producers' and processors' access to credit using cross-sectional survey data. But it has some limitations. The main limitation is the type of data used. Ideally, one would like to run a randomized control trial (RCT) to randomly assign participation in the MSP and causally estimate its impact on access to credit. The second-best option would be to have panel data covering the period before and after the establishment of the platform. However, in the absence of an RCT or panel data, the study resorts to cross-sectional data and empirical models that account for sample selection bias and the censored nature of the credit received.
Practical implications
One of the key findings of the study is that participation in the MSP (through different value chain stage associations) increases access to formal credit. This highlights an important and effective mechanism, well-coordinated value chains that integrate lenders, that policymakers can leverage to facilitate access to credit in the agricultural sector.
Social implications
Access to credit is important to boost agricultural productivity and income. Hence, the findings of the study have social implications in terms of poverty reduction in rural areas.
Originality/value
The study contributes to earlier theories and empirical studies on the demand for credit. Its focus on an innovative VCFS, increasingly adopted in many developing countries, adds originality and value to the understanding of mechanisms to unlock agricultural actors' access to credit in low-income countries.
Tanvir Habib Sardar and Ahmed Rimaz Faizabadi
Abstract
Purpose
In recent years, there has been a gradual shift from sequential computing to parallel computing. Nowadays, nearly all computers have multicore processors. To exploit the available cores, parallel computing becomes necessary; it increases speed by processing huge amounts of data in real time. The purpose of this paper is to parallelize a set of well-known programs using different techniques to determine the best way to parallelize each program examined.
Design/methodology/approach
A set of numeric algorithms is parallelized both by hand, using OpenMP, and automatically, using the Pluto tool.
Findings
The work finds that a few of the algorithms are well suited to auto parallelization with the Pluto tool, but many of the algorithms execute more efficiently with OpenMP hand parallelization.
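The kind of loop-level parallelism OpenMP applies to C loops (e.g. a parallel-for with a sum reduction) can be illustrated in Python; OpenMP itself targets C/C++/Fortran, so the sketch below uses concurrent.futures instead, and the function names are illustrative. Note that Python threads do not speed up CPU-bound work because of the GIL; the point here is only the decomposition into independent chunks.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(args):
    # Each worker reduces one independent chunk of the iteration space,
    # analogous to one OpenMP thread's share of a parallel-for loop.
    data, lo, hi = args
    return sum(x * x for x in data[lo:hi])

def parallel_sum_of_squares(data, n_workers=4):
    # Split the index range into n_workers contiguous chunks, map the
    # chunks onto a thread pool, then combine the partial reductions,
    # as "#pragma omp parallel for reduction(+:total)" would in C.
    step = (len(data) + n_workers - 1) // n_workers
    tasks = [(data, i, min(i + step, len(data)))
             for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum, tasks))

data = list(range(10000))
total = parallel_sum_of_squares(data)
```

Hand parallelization means writing this decomposition yourself; an auto-parallelizer like Pluto derives an equivalent loop partitioning from the source code's dependence analysis.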
Originality/value
The paper provides an original study of parallelization using the OpenMP programming paradigm and the Pluto tool.
Abstract
A simplified command line reader suitable for building interactive programs based on Fortran 77 is presented. The reader is a 20:1 condensation of the comprehensive command language interpreter CLIP that supports the NICE integrated software system. The present reader, called TinyCLIP, has been prepared to illustrate basic elements of command-driven programming for beginners interested in writing Fortran-based interactive applications software. The reader is table driven. Data lines given to the reader are parsed into items which are stored in a table. The table can be subsequently accessed by the program that processes the commands. Tools for extending the basic language are briefly discussed and specific extensions suggested.
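The table-driven pattern described in the abstract (parse a data line into items, store them in a table, let the command processor consult the table) can be sketched in a few lines. This is a hypothetical Python analogue, not TinyCLIP's Fortran 77 code; the class and method names are invented for illustration.

```python
import shlex

def parse_line(line):
    # Split a command line into items; quoted strings stay as one item,
    # a simplification of CLIP-style item parsing.
    return shlex.split(line)

class TinyReader:
    # Table-driven reader: every parsed line's items are appended to a
    # table that the command processor can consult later.
    def __init__(self):
        self.table = []
        self.commands = {}

    def register(self, verb, handler):
        # Extending the basic language = registering a new verb/handler.
        self.commands[verb] = handler

    def feed(self, line):
        items = parse_line(line)
        if not items:
            return None
        self.table.append(items)
        handler = self.commands.get(items[0].upper())
        return handler(items[1:]) if handler else None

reader = TinyReader()
reader.register("SET", lambda args: ("set", args[0], args[1]))
result = reader.feed('SET budget "next year"')
```

Registering additional verbs extends the command language without touching the reader itself, which is the point of the table-driven design.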
Abstract
Although there are many articles on CDS/ISIS which describe projects in which conversion techniques are used, there is very little literature about the conversion programs themselves. This paper provides an overview of the conversion process and gives brief descriptions of some of the available programs.
Abstract
This paper addresses the concept of automated library systems of the 1980s as a marriage of traditional bibliographic transaction processing applications and those now emerging under the rubric of the advanced office system. This is the concept of CESS, the Comprehensive Electronic Service System for the library or information center. The basis of CESS will be a distributed data processing system eventually linking the local library, via computer-to-computer communication, to institutional parent, regional and national level systems and their associated services. Functional application distribution for this system is discussed, with Computer Consoles, Inc.'s Office Power and Prime Computer, Inc.'s Prime Office Automation System (POAS) used to show the office automation capabilities and their integration aspects with online bibliographic systems for the library. Present and near-term solutions to creating CESS concept systems are presented.
Abstract
StarLogo is a computer modeling tool that empowers students to understand the world through the design and creation of complex systems models. StarLogo enables students to program software creatures to interact with one another and their environment, and study the emergent patterns from these interactions. Building an easy‐to‐understand, yet powerful tool for students required a great deal of thought about the design of the programming language, environment, and its implementation. The salient features are StarLogo's great degree of transparency (the capability to see how a simulation is built), its support to let students create their own models (not just use models built by others), its efficient implementation (supporting simulations with thousands of independently executing creatures on desktop computers), and its flexible and simple user interface (which enables students to interact dynamically with their simulation during model testing and validation). The resulting platform provides a uniquely accessible tool that enables students to become full‐fledged practitioners of modeling. In addition, we describe the powerful insights and deep scientific understanding that students have developed through the use of StarLogo.
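The StarLogo idea of many independently executing creatures whose local interactions produce emergent patterns can be sketched as follows. This is a hypothetical Python toy, not StarLogo itself: each creature repeatedly moves to the most crowded neighbouring cell, and aggregation emerges without any global coordination. The rule and all names are invented for illustration.

```python
import random

class Creature:
    # A StarLogo-style "creature": every instance runs the same simple
    # local rule; the global pattern emerges from many such interactions.
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, world, size):
        # Local rule: move to the 3x3-neighbourhood cell holding the most
        # creatures (ties broken at random), a minimal aggregation rule.
        neighbours = [((self.x + dx) % size, (self.y + dy) % size)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        random.shuffle(neighbours)
        self.x, self.y = max(neighbours, key=lambda c: world.get(c, 0))

def run(n_creatures=200, size=20, ticks=30):
    random.seed(1)
    creatures = [Creature(random.randrange(size), random.randrange(size))
                 for _ in range(n_creatures)]
    for _ in range(ticks):
        # Synchronous update: all creatures read the same world snapshot.
        world = {}
        for c in creatures:
            world[(c.x, c.y)] = world.get((c.x, c.y), 0) + 1
        for c in creatures:
            c.step(world, size)
    occupied = {(c.x, c.y) for c in creatures}
    return creatures, occupied

creatures, occupied = run()
```

After a few ticks the creatures, initially scattered over the grid, have collapsed into a handful of clumps; no creature was ever told to form clusters, which is the kind of emergence the abstract describes students studying.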
Jivan Shrikrishna Parab, Rupesh Sadanand Paliekar Porob, Kottanal Roy Francis Joseph, Kunal Vishwanath Naik, Rajanish K. Kamat and Gourish M. Naik
Abstract
Purpose
Aims to design a heterogeneous embedded system with CPLD and microcontroller as co‐processors sharing a memory module.
Design/methodology/approach
The system receives an external analog input signal, which is applied to the PIC 16F73 microcontroller. Upon converting the data into a digital format using the on-chip ADC, the PIC stores the digitized version in the SRAM (HCM 6264) chip. The SRAM HCM 6264 is used as a shared memory module, of which both the PIC and the CPLD can access all locations. Once the PIC passes control to the CPLD, further processing is carried out by the CPLD without any intervention from the PIC. This is a true example of co-processing by architecturally diverse computing modules from completely different vendors with totally different programming suites.
Findings
The board has been tested with IC temperature sensors and also found to be useful for sensor array applications involving three types of processing viz. analog (through instrumentation amplifier), real‐time digital (through microcontroller) and customized reconfigurable digital (with the CPLD).
Practical implications
The system has several potential applications in avionics, military and robotic embedded systems, which have inherent real‐time constraints that need to be supported by the underlying hardware and driver programs.
Originality/value
Discusses the rare and unique combination of diversified processing core to build an embedded system.
Abstract
Information is the most valuable but least valued tool the scientist/engineer has. Computer program developments based on the finite element and boundary element techniques are now receiving considerable attention from the engineering community. The same is valid for their satellite programs such as pre- and post-processors. Recently, expert systems have also been developed in the field of structural mechanics. There are thousands of different programs in use and new ones are continuously being developed. Output of related literature on finite element (FE) and boundary element (BE) technology has grown at a prodigious rate in the last two decades. Effective retrieval of this information is necessary, but it is impossible without computer assistance. MAKEBASE is a special-purpose, menu-driven database which stores all types of information listed above. The development of this database was started five years ago. Today, MAKEBASE contains information about 1600 different FE/BE programs and more than 30,000 literature references. It is updated on a daily basis. MAKEBASE is implemented on the VAX 11/780 (VMS) and on Apollo workstations, and different subsets of the literature references have also been transferred into the microcomputer environment as individual PC databases. This paper describes the latest version of MAKEBASE and outlines the philosophy for the use of individual, specially tailored micro-databases for PCs.