Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 20 July 2020

E.N. Osegi

In this paper, an emerging state-of-the-art machine intelligence technique called the Hierarchical Temporal Memory (HTM) is applied to the task of short-term load forecasting…

Abstract

In this paper, an emerging state-of-the-art machine intelligence technique called the Hierarchical Temporal Memory (HTM) is applied to the task of short-term load forecasting (STLF). An HTM Spatial Pooler (HTM-SP) stage is used to continually form sparse distributed representations (SDRs) from univariate load time series data, a temporal aggregator transforms the SDRs into a sequential bivariate representation space, and an overlap classifier makes temporal classifications from the bivariate SDRs through time. The comparative performance of HTM on several daily electrical load time series, including the Eunite competition dataset and the Polish power system dataset from 2002 to 2004, is presented. The robustness of HTM is further validated using hourly load data from three more recent electricity markets. The results obtained from experiments on the Eunite and Polish datasets indicate that HTM performs better than the existing techniques reported in the literature. In general, the robustness tests also show that the error distribution of the proposed HTM technique is positively skewed for most of the years considered, with kurtosis values mostly lower than the base value of 3, indicating a reasonable level of outlier rejection.
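As an illustration only, the overlap-classification step mentioned above can be sketched as follows; this is a minimal sketch assuming SDRs are binary vectors and that classification simply picks the stored SDR with the largest bit overlap. The stored SDRs, the 32-bit width and the load values are hypothetical, and the HTM spatial pooler and temporal aggregator are not reproduced here.

```python
import numpy as np

def overlap_classify(query_sdr, stored_sdrs, labels):
    """Assign the label of the stored SDR with the largest overlap.

    Overlap = number of active bits shared between two binary SDRs.
    stored_sdrs: (n_samples, n_bits) 0/1 array of previously seen SDRs.
    labels: value associated with each stored SDR (e.g. a load level).
    """
    overlaps = (stored_sdrs & query_sdr).sum(axis=1)
    return labels[int(np.argmax(overlaps))]

# Toy usage: three stored SDRs over 32 bits, each tied to a hypothetical load value.
rng = np.random.default_rng(0)
stored = (rng.random((3, 32)) < 0.1).astype(np.uint8)   # ~10% active bits
labels = np.array([410.0, 455.0, 430.0])                # hypothetical MW values
query = stored[1].copy()
query[0] ^= 1                                           # slightly perturbed copy of SDR #1
print(overlap_classify(query, stored, labels))          # most likely prints 455.0
```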

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 29 January 2021

Junying Chen, Zhanshe Guo, Fuqiang Zhou, Jiangwen Wan and Donghao Wang

Because of the limited energy of wireless sensor networks (WSNs), energy-efficient data-gathering algorithms are required. This paper proposes a compressive data-gathering algorithm based…

Abstract

Purpose

Because of the limited energy of wireless sensor networks (WSNs), energy-efficient data-gathering algorithms are required. This paper proposes a compressive data-gathering algorithm based on double sparse structure dictionary learning (DSSDL). The purpose of this paper is to reduce the energy consumption of WSNs.

Design/methodology/approach

Historical data are used to construct a sparse representation base. In the dictionary-learning stage, the sparse representation matrix is decomposed into the product of two sparse matrices (the double sparse structure). Then, in the dictionary-update stage, the sparse representation matrix is orthogonalized and unitized. The resulting double sparse structure dictionary is applied to compressive data gathering in WSNs.
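As a rough illustration of the dictionary-update idea sketched above, the snippet below composes a dictionary from a fixed base and a sparse coefficient matrix, then orthogonalizes and unitizes it. QR factorization is used here as one plausible orthogonalization and all matrix sizes are hypothetical, so this is a sketch of the idea rather than the paper's DSSDL procedure.

```python
import numpy as np

def unit_columns(M):
    """Scale each column to unit l2 norm (the 'unitize' step)."""
    norms = np.linalg.norm(M, axis=0)
    norms[norms == 0] = 1.0
    return M / norms

def double_sparse_dictionary(Phi, A):
    """Compose a dictionary D = Phi @ A from a fixed base Phi and a sparse
    coefficient matrix A (the 'double sparse structure'), then orthogonalize
    and unitize the result.  QR is one plausible orthogonalization; the
    paper's exact update may differ."""
    D = Phi @ A
    Q, _ = np.linalg.qr(D)        # orthogonalize the columns
    return unit_columns(Q)        # columns of Q are already unit norm; kept for clarity

# Toy usage: 64-dimensional signals, base of 64 atoms, 32 sparse combinations.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((64, 64))
A = np.where(rng.random((64, 32)) < 0.1, rng.standard_normal((64, 32)), 0.0)
D = double_sparse_dictionary(Phi, A)
print(D.shape, np.allclose(D.T @ D, np.eye(32), atol=1e-8))
```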

Findings

The dictionary obtained by the proposed algorithm has better sparse representation ability. The experimental results show that the sparse representation error can be reduced by at least 3.6% compared with other dictionaries. In addition, the better sparse representation ability allows the WSNs to use fewer measurements at the same data-gathering accuracy, which means more energy saving. According to the simulation results, the proposed algorithm can reduce energy consumption by at least 2.7% compared with other compressive data-gathering methods at the same data-gathering accuracy.

Originality/value

In this paper, the double sparse structure dictionary is introduced into the compressive data-gathering algorithm in WSNs. The experimental results indicate that the proposed algorithm has good performance on energy consumption and sparse representation.

Details

Sensor Review, vol. 41 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 1 March 1995

Michael Griebel and Veronika Thurner

We study the sparse grid combination technique as an efficient method for the solution of fluid dynamics problems. The combination technique needs only O(h_n^{-1}(log(h_n^{-1}))^{d-1}) grid…

Abstract

We study the sparse grid combination technique as an efficient method for the solution of fluid dynamics problems. The combination technique needs only O(h_n^{-1}(log(h_n^{-1}))^{d-1}) grid points for d-dimensional problems, instead of the O(h_n^{-d}) grid points used by the full grid method. Here, h_n = 2^{-n} denotes the mesh width of the grids. Furthermore, provided that the solution is sufficiently smooth, the accuracy (with respect to the L2- and the L∞-norm) of the sparse grid combination solution is O(h_n^α(log(h_n^{-1}))^{d-1}), which is only slightly worse than the O(h_n^α) obtained by the full grid solution. Here, α includes the order of the underlying discretization scheme, as well as the influence of singularities. Thus, the combination technique is very economical in both storage requirements and computing time, yet achieves almost the same accuracy as the usual full grid solution. Another advantage of the combination technique is that only simple data structures are necessary. Whereas other sparse grid methods need hierarchical data structures and thus specially designed solvers, the combination method handles merely d-dimensional arrays. Thus, the implementation of the combination technique can be based on any "black box solver". However, for reasons of efficiency, an appropriate multigrid solver should be used. Often, fluid dynamics problems have to be solved on rather complex domains. A common approach is to divide the domain into blocks in order to facilitate the handling of the problem. We show that the combination technique works on such block-structured grids as well. When dealing with complicated domains, it is often desirable to grade a grid around a singularity. Graded grids are also supported by the combination technique. Finally, we present the first results of numerical experiments for the application of the combination method to CFD problems. There, we consider two-dimensional laminar flow problems with moderate Reynolds numbers, and discuss the advantages of the combination method.
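For orientation, the two-dimensional combination formula behind these grid-point counts can be written as follows. This is the standard textbook form of the combination technique under one common indexing convention, given here as an illustration rather than quoted from the paper.

```latex
% Two-dimensional sparse grid combination technique:
% u_{l_1,l_2} is the solution on an anisotropic grid with mesh widths
% (2^{-l_1}, 2^{-l_2}); only O(n) such grids enter the combination.
u_n^{c}(x,y) \;=\; \sum_{l_1 + l_2 = n+1} u_{l_1,l_2}(x,y)
             \;-\; \sum_{l_1 + l_2 = n} u_{l_1,l_2}(x,y),
\qquad l_1,\, l_2 \ge 1 .
```

Each grid in the sums has roughly O(2^n) = O(h_n^{-1}) points and there are O(n) = O(log(h_n^{-1})) of them, which is where the O(h_n^{-1}(log(h_n^{-1}))^{d-1}) count for d = 2 comes from.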

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 5 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 5 June 2017

Zhoufeng Liu, Lei Yan, Chunlei Li, Yan Dong and Guangshuai Gao

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP…

Abstract

Purpose

The purpose of this paper is to find an efficient fabric defect detection algorithm by means of exploring the sparsity characteristics of main local binary pattern (MLBP) extracted from the original fabric texture.

Design/methodology/approach

In the proposed algorithm, original LBP features are first extracted from the fabric texture to be detected, and the MLBPs are selected by occurrence probability. Second, a dictionary is established with MLBP atoms that can sparsely represent all the LBPs. Then, the gray-scale differences between the neighborhood pixels and the central pixel are calculated, together with the mean difference for pixels that share the same MLBP feature. Next, the defect-containing image is reconstructed as a normal texture image. Finally, the residual between the reconstructed and original images is calculated, a simple threshold segmentation divides the residual image, and the defective region is detected.
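To make the first two steps concrete, the sketch below extracts 8-neighbour LBP codes and keeps the "main" patterns by cumulative occurrence probability. The 90% coverage threshold, the neighbourhood definition and the random test texture are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for each interior pixel of a grayscale image."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)   # one bit per neighbour
    return codes

def main_lbp_patterns(codes, coverage=0.9):
    """Return the smallest set of LBP codes whose occurrence probabilities
    sum to at least `coverage` (a hypothetical 90% threshold)."""
    hist = np.bincount(codes.ravel(), minlength=256) / codes.size
    order = np.argsort(hist)[::-1]            # most frequent patterns first
    cum = np.cumsum(hist[order])
    k = int(np.searchsorted(cum, coverage)) + 1
    return order[:k]

rng = np.random.default_rng(2)
texture = rng.integers(0, 256, size=(64, 64), dtype=np.int16)  # stand-in fabric patch
mlbp = main_lbp_patterns(lbp_codes(texture))
print(len(mlbp), "main patterns selected")
```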

Findings

The experimental results show that the fabric texture can be reconstructed more efficiently and that the proposed method achieves better defect detection performance. Moreover, it offers empirical insights into how to exploit the sparsity of a particular feature, e.g. LBP.

Research limitations/implications

Because of the selected research approach, the results may lack generalizability to chambray. Therefore, researchers are encouraged to test the proposed propositions further.

Originality/value

In this paper, a novel fabric defect detection method which extracts the sparsity of MLBP features is proposed.

Details

International Journal of Clothing Science and Technology, vol. 29 no. 3
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 19 December 2023

Jinchao Huang

Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based…

Abstract

Purpose

Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based on RGBD clothing images often suffer from high-dimensional feature representations, leading to compromised performance and efficiency.

Design/methodology/approach

To address this issue, this paper proposes a novel method called Manifold Embedded Discriminative Feature Selection (MEDFS) to select global and local features, thereby reducing the dimensionality of the feature representation and improving performance. Specifically, by combining three global features and three local features, a low-dimensional embedding is constructed to capture the correlations between features and categories. MEDFS formulates an optimization framework that uses manifold mapping and sparse regularization to achieve feature selection. The optimization objective is solved with an alternating iterative strategy, which ensures convergence.
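As a loose illustration of manifold-based feature scoring (not the authors' MEDFS optimization, which alternates manifold mapping with sparse regularization), the sketch below ranks features by how smoothly they vary over a k-nearest-neighbour graph of the samples. The graph construction, data and the number of retained features are all hypothetical.

```python
import numpy as np

def knn_graph(X, k=5):
    """Symmetric k-NN adjacency matrix (0/1 weights) over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    idx = np.argsort(d2, axis=1)[:, :k]
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(X.shape[0]), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)

def manifold_feature_scores(X, k=5):
    """Score each feature by how smoothly it varies over the k-NN graph
    (smaller = more consistent with the manifold structure)."""
    W = knn_graph(X, k)
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # graph Laplacian
    Xc = X - X.mean(axis=0)
    num = np.einsum('ij,jk,ki->i', Xc.T, L, Xc)    # f^T L f per feature
    den = np.einsum('ij,jk,ki->i', Xc.T, D, Xc)    # f^T D f per feature
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 6))                  # 100 samples, 6 hypothetical features
scores = manifold_feature_scores(X)
selected = np.argsort(scores)[:3]                  # keep the 3 smoothest features
print(selected)
```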

Findings

Empirical studies conducted on a publicly available RGBD clothing image dataset demonstrate that the proposed MEDFS method achieves highly competitive clothing classification performance while maintaining efficiency in clothing recognition and retrieval.

Originality/value

This paper introduces a novel approach for multi-category clothing recognition and retrieval, incorporating the selection of global and local features. The proposed method holds potential for practical applications in real-world clothing scenarios.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 19 June 2017

Qi Wang, Pengcheng Zhang, Jianming Wang, Qingliang Chen, Zhijie Lian, Xiuyan Li, Yukuan Sun, Xiaojie Duan, Ziqiang Cui, Benyuan Sun and Huaxiang Wang

Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the…

Abstract

Purpose

Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction for EIT is a nonlinear problem, and its generalized inverse operator is usually ill-posed and ill-conditioned. Therefore, the solutions for EIT are not unique and are highly sensitive to measurement noise.

Design/methodology/approach

This paper develops a novel image reconstruction algorithm for EIT based on patch-based sparse representation. The sparsifying dictionary optimization and the image reconstruction are performed alternately. Two patch-based sparsity models, namely square-patch sparsity and column-patch sparsity, are discussed and compared with global sparsity.
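A small sketch of the patch-based sparse-representation idea follows, using scikit-learn's generic dictionary learning as a stand-in for the paper's alternating dictionary-optimization and image-reconstruction steps. The image size, patch size and dictionary size are hypothetical.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Hypothetical 32x32 conductivity image (e.g. an intermediate EIT estimate).
rng = np.random.default_rng(4)
sigma = rng.standard_normal((32, 32))

# Square patches, as in the 'square-patch sparsity' variant.
patches = extract_patches_2d(sigma, (8, 8), max_patches=200, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)          # remove the per-patch mean

# Learn a small dictionary and sparse-code the patches; this generic routine
# stands in for the paper's alternating optimization scheme.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)
approx = codes @ dico.components_              # sparse approximation of the patches

print("nonzeros per patch (mean):", (codes != 0).sum(axis=1).mean())
print("relative patch error:", np.linalg.norm(approx - X) / np.linalg.norm(X))
```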

Findings

Both simulation and experimental results indicate that the patch based sparsity method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages.

Originality/value

EIT image is reconstructed based on patch-based sparse representation. Square-patch sparsity and column-patch sparsity are proposed and compared. Sparse dictionary optimization and image reconstruction are performed alternately. The new method tolerates a relatively high level of noise in measured voltages.

Details

Sensor Review, vol. 37 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 1 January 2014

Xiaoyan Zhuang, Yijiu Zhao, Li Wang and Houjun Wang

The purpose of this paper is to present a compressed sensing (CS)-based sampling system for ultra-wide-band (UWB) signal. By exploiting the sparsity of signal, this new sampling…

Abstract

Purpose

The purpose of this paper is to present a compressed sensing (CS)-based sampling system for ultra-wide-band (UWB) signal. By exploiting the sparsity of signal, this new sampling system can sub-Nyquist sample a multiband UWB signal, whose unknown frequency support occupies only a small portion of a wide spectrum.

Design/methodology/approach

A random Rademacher sequence is used to sense the signal in the frequency domain, and a matrix constructed by Hadamard basis is used to compress the signal. The probability of reconstruction is proved mathematically, and the reconstruction matrix is developed in the frequency domain.
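For intuition, the toy example below senses a frequency-sparse signal with a random Rademacher matrix and recovers it by orthogonal matching pursuit. A DCT basis and generic OMP stand in for the paper's Hadamard compression and frequency-domain reconstruction matrix, and all dimensions are hypothetical.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(5)
n, m, k = 256, 64, 4                           # signal length, measurements, sparsity

# Frequency-sparse test signal: k active atoms in an orthonormal DCT basis.
Psi = idct(np.eye(n), axis=0, norm='ortho')    # columns = cosine atoms
alpha = np.zeros(n)
alpha[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x = Psi @ alpha

# Random Rademacher (+/-1) sensing, sub-Nyquist measurement y = Phi x.
Phi = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi @ x

# Recover the sparse coefficients from y = (Phi Psi) alpha via OMP,
# then rebuild the signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)
x_hat = Psi @ omp.coef_
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```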

Findings

Simulation results indicate that, with an ultra-low sampling rate, the proposed system can capture and reconstruct sparse multiband UWB signals with high probability. For sparse multiband UWB signals, the proposed system has the potential to break through the Shannon theorem.

Originality/value

Different from the traditional sub-Nyquist techniques, the proposed sampling system not only breaks through the limitation of Shannon theorem but also avoids the barrier of input bandwidth of analog-to-digital converters (ADCs).

Details

COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 33 no. 1/2
Type: Research Article
ISSN: 0332-1649

Keywords

Book part
Publication date: 24 September 2010

Elizabeth H. Gorman and Fiona M. Kay

Although law schools have seen rising representation of diverse racial and ethnic groups among students, minorities continue to represent disproportionately small percentages of…

Abstract

Although law schools have seen rising representation of diverse racial and ethnic groups among students, minorities continue to represent disproportionately small percentages of lawyers within large corporate law firms. Prior research on the nature and causes of minority underrepresentation in such firms has been sparse. In this paper, we use data on a national sample of more than 1,300 law firm offices to examine variation across large U.S. law firms in the representation of African-Americans, Hispanics, and Asian-Americans. Overall, minorities are better represented in offices located in Western states and in major metropolitan areas; offices that are larger and affiliated with larger firms; offices of firms with higher revenues and profits per partner; offices with greater associate–partner leverage; and branch offices rather than principal offices. They are equally distributed between offices with single-tier and two-tier partnerships. Distinct patterns emerge, however, when the three groups are considered separately and when hierarchical rank within firms is taken into account.

Details

Special Issue Law Firms, Legal Culture, and Legal Practice
Type: Book
ISBN: 978-0-85724-357-7

Article
Publication date: 23 August 2019

Shenlong Wang, Kaixin Han and Jiafeng Jin

In the past few decades, the content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term of…

Abstract

Purpose

In the past few decades, content-based image retrieval (CBIR), which focuses on the exploration of image feature extraction methods, has been widely investigated. The term feature extraction is used in two senses: application-based feature expression and mathematical approaches for dimensionality reduction. Feature expression is a technique for describing image color, texture and shape information with feature descriptors; thus, obtaining an effective image feature expression is the key to extracting high-level semantic information. However, most previous studies of image feature extraction and expression methods in CBIR have not been systematic. This paper aims to introduce the basic low-level image feature expression techniques for color, texture and shape features that have been developed in recent years.

Design/methodology/approach

First, this review outlines the development process and expounds the principles of various image feature extraction methods, such as color, texture and shape feature expression. Second, some of the most commonly used low-level image expression algorithms are implemented, and their benefits and drawbacks are summarized. Third, the effectiveness of global and local features in image retrieval, including some classical models and illustrations drawn from part of our experiments, is analyzed. Fourth, the sparse representation and similarity measurement methods are introduced, and the retrieval performance of statistical methods is evaluated and compared.
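As one concrete instance of the low-level color features discussed in this review, the sketch below computes a joint RGB histogram descriptor and compares images by histogram intersection; the bin count and the random test images are hypothetical.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Joint RGB histogram descriptor, L1-normalized.

    img: uint8 array of shape (H, W, 3).  Returns a flattened bins**3 vector."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(6)
query = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
database = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(5)]
hq = color_histogram(query)
scores = [histogram_intersection(hq, color_histogram(im)) for im in database]
print("best match:", int(np.argmax(scores)))
```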

Findings

The core of this survey is to review the state of low-level image expression methods and to study the pros and cons of each method, their applicable occasions and certain implementation measures. This review notes that single-feature descriptions of image peculiarities may lead to unsatisfactory image retrieval capability, since such descriptions have significant singularity and face considerable limitations and challenges in CBIR.

Originality/value

A comprehensive review of the latest developments in image retrieval using low-level feature expression techniques is provided in this paper. This review not only introduces the major approaches for image low-level feature expression but also supplies a pertinent reference for those engaging in research regarding image feature extraction.

Article
Publication date: 1 June 2003

Jaroslav Mackerle

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics…

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Keywords
