Search results

1 – 10 of over 2000
Content available
Article
Publication date: 1 June 1998

John Galletly

Details

Kybernetes, vol. 27 no. 4
Type: Research Article
ISSN: 0368-492X

Content available
Article
Publication date: 1 March 1999

Alex M. Andrew

Details

Kybernetes, vol. 28 no. 2
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 7 July 2022

Sirilak Ketchaya and Apisit Rattanatranurak

Abstract

Purpose

Sorting is a fundamental task in solving computer science problems. The best-known divide-and-conquer sorting algorithm is quicksort. It starts by dividing the data into subarrays and finally sorts them.

Design/methodology/approach

In this paper, the algorithm named Dual Parallel Partition Sorting (DPPSort) is analyzed and optimized. It is built on a partitioning algorithm named Dual Parallel Partition (DPPartition), which is analyzed and optimized in this paper and combined with the standard sorting functions qsort and STLSort (implementations of the quicksort and introsort algorithms, respectively). The algorithm runs on any shared-memory/multicore system, using the OpenMP multiprocessing library, which is compatible with the C/C++ standard library. The authors' algorithm recursively divides an unsorted array into two equal halves in parallel with Lomuto's partitioning and merges them without compare-and-swap instructions. Then, qsort/STLSort is executed in parallel once a subarray is smaller than the sorting cutoff.
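
The abstract describes the core mechanics: two Lomuto partitions run in parallel on the two halves of the array, a block merge avoids per-element compare-and-swap, and small subarrays fall back to a library sort. The sketch below is an illustrative reconstruction of that structure, not the authors' C/C++/OpenMP code; the cutoff value, the pivot choice, and the use of Python threads in place of OpenMP tasks are all assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

CUTOFF = 1 << 14  # hypothetical cutoff; the paper tunes this empirically

def lomuto(a, lo, hi, pivot):
    # Lomuto partition of a[lo:hi]; returns first index of the > pivot block.
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    return i

def dppsort(a, lo, hi, pool):
    if hi - lo <= CUTOFF:
        a[lo:hi] = sorted(a[lo:hi])  # stand-in for the qsort/STLSort fallback
        return
    mid = (lo + hi) // 2
    pivot = a[mid]  # assumed pivot choice; the abstract does not prescribe one
    # Partition both halves around the same pivot in parallel (threads stand
    # in for OpenMP tasks; CPython's GIL means this shows structure, not speed).
    left = pool.submit(lomuto, a, lo, mid, pivot)
    right = pool.submit(lomuto, a, mid, hi, pivot)
    p1, p2 = left.result(), right.result()
    # Merge with one block rotation instead of per-element compare-and-swap:
    # move the <= pivot block of the right half next to that of the left half.
    a[p1:p2] = a[mid:p2] + a[p1:mid]
    split = p1 + (p2 - mid)
    if split == lo or split == hi:  # degenerate pivot: fall back to library sort
        a[lo:hi] = sorted(a[lo:hi])
        return
    dppsort(a, lo, split, pool)
    dppsort(a, split, hi, pool)

if __name__ == "__main__":
    import random
    data = [random.getrandbits(64) for _ in range(1 << 16)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        dppsort(data, 0, len(data), pool)
    assert data == sorted(data)
```

In the paper's setting, the two partition calls and the recursion would be OpenMP tasks, and the cutoff fallback would call qsort or STLSort.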

Findings

In the authors' experiments, run on a 4-core Intel i7-6770 system under Ubuntu Linux, DPPSort is faster than qsort and STLSort by up to 6.82× and 5.88×, respectively, on Uint64 random distributions.

Originality/value

The authors improve the performance of the parallel sorting algorithm by reducing the compare-and-swap instructions in the algorithm. This concept can be applied to related problems to increase the speedup of other algorithms.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 4 June 2019

Kultida Hattakitpanichakul, Rutja Phuphaibul, Srisamorn Phumonsakul and Chukiat Viwatwongkasem

Abstract

Purpose

The purpose of this paper is to examine the effectiveness of abstinence-based sexual education programs delivered in parallel to Thai parents and their early adolescent daughters, with the goal of promoting sexual abstinence and improving parent–daughter communication regarding sexual topics.

Design/methodology/approach

A quasi-experimental design included three groups of parent/daughter dyads: Group 1, controls (n=40); Group 2, Adolescent Program (n=40); and Group 3, Adolescent Parent Program (APP) (n=42). Outcome measures included parent–adolescent communications and adolescents' sexual abstinence cognitions and intent to abstain from sexual behaviors, measured at five and nine weeks post-program.

Findings

Generalized estimating equation analyses indicated that the dual program (APP) was more effective in increasing parental communication with daughters than the control condition (p<0.05), and only the daughters in the APP reported more positive subjective norms, a greater sense of perceived behavioral control and stronger intent to abstain than did Group 1 (p<0.05).
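
For readers unfamiliar with the analysis named above, the sketch below shows a generalized estimating equation (GEE) model of the kind the findings describe: repeated outcome measures clustered within parent/daughter dyads, compared across the three groups. This is not the study's analysis code; all data, variable names, and the working correlation choice are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per dyad per measurement occasion.
rng = np.random.default_rng(0)
rows = []
for dyad in range(122):                 # 40 + 40 + 42 dyads, as in the study
    group = dyad % 3                    # 0 = control, 1 = AP, 2 = APP
    for week in (5, 9):                 # weeks post-program, as in the study
        score = 3.0 + 0.4 * group + 0.05 * week + rng.normal(0, 0.5)
        rows.append({"dyad": dyad, "group": group, "week": week,
                     "communication": score})
df = pd.DataFrame(rows)

# Exchangeable working correlation accounts for within-dyad dependence.
model = smf.gee("communication ~ C(group) + week", groups="dyad", data=df,
                family=sm.families.Gaussian(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```

The exchangeable correlation structure is one common choice for dyadic repeated measures; the study's exact model specification is not given in the abstract.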

Originality/value

Supporting the development of family environments in which female adolescents are able to talk about sexuality is essential for adolescent sexual health promotion. The data provide further evidence that a dual program with simultaneous parent and female adolescent interactive activities over three sessions is superior to programs that target either the parents or the adolescents alone. Hence, further replication with more parent–daughter dyads, and then within more diverse cultures and populations, is warranted. Developing and testing a similarly structured program for parents and sons is also required.

Details

Journal of Health Research, vol. 33 no. 4
Type: Research Article
ISSN: 2586-940X

Content available

Details

Kybernetes, vol. 35 no. 1/2
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 3 August 2020

Maryam AlJame and Imtiaz Ahmad

Abstract

The evolution of technologies has unleashed a wealth of challenges by generating massive amounts of data. Recently, biological data have increased exponentially, introducing several computational challenges. DNA short-read alignment is an important problem in bioinformatics, and the exponential growth in the number of short reads has increased the need for an ideal platform to accelerate the alignment process. Apache Spark is a cluster-computing framework that provides data parallelism and fault tolerance. In this article, we propose a Spark-based algorithm, called Spark-DNAligning, to accelerate the DNA short-read alignment problem. Spark-DNAligning exploits Apache Spark's performance optimizations such as broadcast variables, joining after partitioning, caching, and in-memory computation. Spark-DNAligning is evaluated in terms of performance by comparing it with the SparkBWA tool and a MapReduce-based algorithm called CloudBurst. All experiments are conducted on Amazon Web Services (AWS). Results demonstrate that Spark-DNAligning outperforms both tools, providing speedups of 101× to 702× when aligning gigabytes of short reads to the human genome. This empirical evaluation reveals that Apache Spark offers promising solutions to the DNA short-read alignment problem.
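
As a concrete illustration of the optimizations listed above, the PySpark sketch below broadcasts a k-mer index of the reference once per executor, computes candidate alignment positions for each read in parallel, and caches the result for reuse. It is a toy seed-matching sketch, not the Spark-DNAligning implementation; the seed length K, the index structure, and the toy sequences are all illustrative assumptions.

```python
from pyspark import SparkContext

K = 11  # hypothetical seed length

def build_index(reference):
    # Map every k-mer of the reference to the positions where it occurs.
    index = {}
    for pos in range(len(reference) - K + 1):
        index.setdefault(reference[pos:pos + K], []).append(pos)
    return index

sc = SparkContext(appName="dna-alignment-sketch")
reference = "ACGTTGCA" * 200                   # toy reference sequence
bindex = sc.broadcast(build_index(reference))  # shipped once per executor

reads = sc.parallelize([("read1", "ACGTTGCAACGTT"),
                        ("read2", "GTTGCAACGTTGC")], numSlices=4)

def seed_hits(read):
    # Candidate alignment starts: positions implied by every matching seed.
    rid, seq = read
    starts = set()
    for i in range(len(seq) - K + 1):
        for pos in bindex.value.get(seq[i:i + K], []):
            starts.add(pos - i)
    return [(rid, s) for s in sorted(starts) if s >= 0]

candidates = reads.flatMap(seed_hits).cache()  # reused downstream: keep in memory
print(candidates.take(5))
```

A full aligner would verify and score each candidate position; the sketch stops at seed candidates to keep the Spark mechanics (broadcast, partitioned parallelism, caching) visible.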

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Content available
Article
Publication date: 1 June 1998

John Galletly

Details

Kybernetes, vol. 27 no. 4
Type: Research Article
ISSN: 0368-492X

Content available
Article
Publication date: 1 March 2002

Open Access
Article
Publication date: 16 October 2017

Pawel Sitek, Jaroslaw Wikarek and Peter Nielsen

Abstract

Purpose

The purpose of this paper is to build a novel approach that allows flexible modeling and solving of food supply chain management (FSCM) problems. The models developed use the data (data-driven modeling) as early as possible in the modeling phase, which leads to a better and more realistic representation of the problems being modeled.

Design/methodology/approach

An essential feature of the presented approach is its declarativeness. The declarative approach, which additionally incorporates constraint satisfaction problems, provides fast and easy modeling of constraints that differ in type and character. The proposed approach was implemented using an original hybrid method in which constraint logic programming (CLP) and mathematical programming (MP) are integrated, and transformation of the model is used as a presolving technique.
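
To make the declarative style concrete, the toy model below states a two-depot, two-store shipment problem purely as constraints, in the spirit of the constraint-satisfaction component described above. It is not the authors' hybrid CLP/MP implementation: the depots, capacities, and demands are invented, and the python-constraint package stands in for a CLP engine.

```python
from constraint import Problem  # pip install python-constraint

problem = Problem()
# Decision variables: units shipped on each depot -> store lane (0..10).
for lane in ("d1_s1", "d1_s2", "d2_s1", "d2_s2"):
    problem.addVariable(lane, range(11))

# Demand constraints: each store must receive exactly its (assumed) demand.
problem.addConstraint(lambda a, b: a + b == 8, ("d1_s1", "d2_s1"))
problem.addConstraint(lambda a, b: a + b == 6, ("d1_s2", "d2_s2"))
# Capacity constraints: each depot ships at most its (assumed) stock.
problem.addConstraint(lambda a, b: a + b <= 9, ("d1_s1", "d1_s2"))
problem.addConstraint(lambda a, b: a + b <= 9, ("d2_s1", "d2_s2"))

print(problem.getSolution())  # any feasible shipment plan
```

In the paper's hybrid method, a model of this kind would be transformed during presolving and the remaining optimization handed to an MP solver; the sketch shows only the declarative modeling style, where constraints of different type and character are simply stated, not procedurally enforced.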

Findings

The proposed constraint-driven approach has proved to be extremely flexible and efficient. The findings from the experiments dedicated to efficiency were particularly interesting: depending on the instance data, the constraint-driven approach found solutions up to 1,000 times faster than the MP approach alone.

Research limitations/implications

Given the limited applicability of exact methods to NP-hard problems, future work should integrate CLP with environments other than MP, for example metaheuristics such as genetic algorithms or ant colony optimization.

Practical implications

The approach can serve as the basis for building a decision support system for FSCM, with straightforward integration with databases, enterprise resource planning systems, management information systems, etc.

Originality/value

A new constraint-driven approach to FSCM has been proposed as an extension of the hybrid approach. A new decision-making model of distribution and logistics for the food supply chain is also built, and a presolving technique for this model is presented.

Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for assessing the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function, and it presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may easily be used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations based on relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and efficiency scores for each of the evaluated units. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piecewise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets that are determined by the “furthest” efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the judged unit, and consequently, some authors in the literature have suggested determining the closest targets instead. The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application to airlines, that substantial differences can be found between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier.

The determination of the closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models that minimize some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, in which technical inefficiency with respect to the “weakly” efficient frontier is formally defined through mathematical distances.
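
For reference, the least-distance idea traced above to Briec (1998) can be written compactly. The sketch below states technical inefficiency as a Hölder distance to the weakly efficient frontier; the notation (the technology T, its weakly efficient subset, and the norm order p) is chosen here for illustration, not quoted from the survey.

```latex
% Least-distance (Hölder) inefficiency of a unit (x_0, y_0), sketched from
% the abstract's description of Briec (1998); notation is illustrative.
\[
  D_p(x_0, y_0) \;=\; \inf\left\{ \left\| (x_0, y_0) - (x, y) \right\|_p
  \;:\; (x, y) \in \partial^{W}(T) \right\}
\]
% \partial^{W}(T): weakly efficient frontier of the technology T.
```

Minimizing this distance, rather than taking the furthest projection used by traditional DEA models, is what yields the closest targets discussed above.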

Findings

All the interesting features of the determination of the closest targets from a benchmarking point of view have recently generated increasing interest among researchers in calculating the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). In this paper, we therefore present a general classification of published contributions, mainly from a methodological perspective, and we indicate avenues for further research on this topic. The approaches we cite differ in the way the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures, where the aim is to globally minimize the DEA model slacks in order to determine the closest efficient targets. However, as we show later in the text, minimizing a mathematical distance in DEA is not an easy task, because it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.

Originality/value

To the best of our knowledge, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599
