Search results

1 – 10 of 71
Article
Publication date: 1 June 2000

Witold Pedrycz and George Vukovich

Abstract

In this study, we introduce and discuss a concept of fuzzy plug‐ins and investigate their role in system modeling. Fuzzy plug‐ins are rule‐based constructs augmenting a given global model (arising in the form of some regression relationship, neural network, etc.) in the sense that they compensate for the mapping errors produced by the global model. The proposed design method revolves around information granules of error defined in the output space and the induced fuzzy relations expressed in the space of input variables. The construction of the linguistic granules is carried out with the aid of context‐based fuzzy clustering – a generalized version of the well‐known FCM algorithm that is well suited to the design of the fuzzy sets and relations used as a blueprint of the plug‐ins. An overall modeling architecture combining the global model with its plug‐ins is discussed in detail and a complete design procedure is provided. Finally, illustrative numerical examples are included.
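For readers unfamiliar with the FCM algorithm that the abstract builds on, the standard membership update can be sketched as follows (a minimal illustration only, not the authors' context-based variant, whose context constraints are not reproduced here):

```python
# Standard FCM membership update for 1-D data (illustrative sketch).
# u[k][i] is the degree to which data point k belongs to cluster i.

def fcm_memberships(data, prototypes, m=2.0):
    u = []
    for x in data:
        d = [abs(x - v) for v in prototypes]
        if 0.0 in d:  # point coincides with a prototype: crisp membership
            row = [1.0 if di == 0.0 else 0.0 for di in d]
        else:
            row = [1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in d)
                   for di in d]
        u.append(row)
    return u

data = [0.1, 0.4, 0.9]
u = fcm_memberships(data, prototypes=[0.0, 0.5, 1.0])
```

Each row of the partition matrix sums to one, and the highest membership is assigned to the nearest prototype; the context-based variant additionally constrains the memberships by a fuzzy set of context defined in the output space.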

Details

Kybernetes, vol. 29 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 July 1999

W. Pedrycz and E. Roventa

Abstract

The concept of fuzzy information has become a cornerstone of the processing and handling of linguistic data. As opposed to the processing of numeric information, for which there is a wealth of advanced methods, entering the area of linguistic information processing immediately confronts us with a genuine need to revisit the fundamental concepts. We first review the notion of information granularity as a primordial concept playing a key role in human cognition. Dwelling on that, the study embarks on the concept of interacting at the level of fuzzy sets. In particular, we discuss a basic construct of a fuzzy communication channel. The idea of communication exploiting fuzzy information calls for its efficient encoding and decoding, which subsequently leads to minimal losses of transmitted information. Interestingly enough, the incurred losses depend heavily on the granularity of the linguistic information involved – in this way one can take advantage of the uncertainty residing within the transmitted information granules and exploit it in the design of the corresponding channel.
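The dependence of transmission loss on granularity can be illustrated with a deliberately simple sketch (not the paper's fuzzy-channel construction): a channel that carries only the index of the nearest codebook prototype, with the decoder emitting that prototype.

```python
# Illustrative only: a coarser codebook (fewer linguistic terms)
# incurs a larger average reconstruction loss.

def transmit(x, prototypes):
    """Send x through a channel that only carries a prototype index."""
    idx = min(range(len(prototypes)), key=lambda i: abs(x - prototypes[i]))
    return prototypes[idx]  # decoded value

def avg_loss(signal, n_terms):
    """Average absolute loss for a uniform codebook of n_terms granules on [0, 1]."""
    protos = [i / (n_terms - 1) for i in range(n_terms)]
    return sum(abs(x - transmit(x, protos)) for x in signal) / len(signal)

signal = [i / 100 for i in range(101)]
coarse = avg_loss(signal, 3)   # three linguistic terms
fine = avg_loss(signal, 9)     # nine linguistic terms
```

Refining the granulation shrinks the loss, mirroring the abstract's point that the channel's performance is governed by the granularity of the information involved.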

Details

Kybernetes, vol. 28 no. 5
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 23 November 2012

Kumar S. Ray

Abstract

Purpose

This paper aims to consider a soft computing approach to pattern classification using the basic tools of fuzzy relational calculus (FRC) and genetic algorithm (GA).

Design/methodology/approach

The paper introduces a new interpretation of multidimensional fuzzy implication (MFI) to represent the author's knowledge about the training data set. It also considers the notion of a fuzzy pattern vector (FPV) to handle the fuzzy information granules of the quantized pattern space and to represent a population of training patterns in the quantized pattern space. The construction of the pattern classifier is essentially based on the estimate of a fuzzy relation Ri between the antecedent clause and consequent clause of each one‐dimensional fuzzy implication. For the estimation of Ri, a floating-point representation of the GA is used. Thus, a set of fuzzy relations is formed from the new interpretation of MFI. This set of fuzzy relations is termed the core of the pattern classifier. Once the classifier is constructed, the non‐fuzzy features of a test pattern can be classified.
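The core inference step of such relational classifiers – composing a fuzzified input pattern with an estimated fuzzy relation – can be sketched with the classic sup-min composition (illustrative values only; the GA-based estimation of Ri is not reproduced here):

```python
def sup_min(A, R):
    """Sup-min composition B = A o R, where B[j] = max_i min(A[i], R[i][j])."""
    return [max(min(a, R[i][j]) for i, a in enumerate(A))
            for j in range(len(R[0]))]

A = [0.2, 0.8, 0.5]        # fuzzified input pattern (3 granules)
R = [[0.9, 0.1],           # hypothetical estimated fuzzy relation
     [0.3, 0.7],           # (rows: input granules, columns: classes)
     [0.6, 0.4]]
B = sup_min(A, R)          # membership of the pattern in each class
```

Here the pattern's class memberships come out as B = [0.5, 0.7], so the second class would be favoured; a GA would tune the entries of R against the training data.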

Findings

The performance of the proposed scheme is tested on synthetic data. Subsequently, the paper uses the proposed scheme for the vowel classification problem of an Indian language. In all these case studies the recognition score of the proposed method is very good. Finally, a benchmark of performance is established by considering the Multilayer Perceptron (MLP), the Support Vector Machine (SVM) and the proposed method. The Abalone, Horse Colic and Pima Indians data sets, obtained from the UCI machine learning repository, are used for the benchmark study. The benchmark study also establishes the superiority of the proposed method.

Originality/value

This new soft computing approach to pattern classification is based on a new interpretation of MFI and a novel notion of FPV. A set of fuzzy relations, which forms the core of the pattern classifier, is estimated using a floating-point GA, and very effective classification of patterns in a vague and imprecise environment is performed. This new approach to pattern classification avoids the curse of high dimensionality of the feature vector. It can provide multiple classifications under overlapped classes.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 5 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 17 August 2018

Xiaoqing Chen, Xinwang Liu and Zaiwu Gong

Abstract

Purpose

The purpose of this paper is to combine the uncertain methods of type-2 fuzzy sets and data envelopment analysis (DEA) evaluation model together. A new type-2 fuzzy DEA efficiency assessment method is established. Then the proposed procedure is applied to the poverty alleviation problem.

Design/methodology/approach

The research method is the DEA model, an effective method for the efficiency assessment of social-economic systems. Since decision-making units with identical efficiency values cannot be ranked in the proposed DEA model, a balance index is introduced to rank them effectively.

Findings

The results show that the proposed method can not only measure the efficiency of the existence of uncertain information but also deal with the ranking of multiple efficient decision-making units.

Originality/value

This paper selects a type-2 fuzzy DEA model to express the substantial uncertain information present in efficiency evaluation problems. We use the parameter decomposition method of type-2 fuzzy programming, or type-2 expectation values indirectly. The balance index is proposed to further distinguish multiple efficient decision-making units. Furthermore, this paper selects rural poverty alleviation in Hainan Province as a case study to verify the feasibility of the method. The relative efficiency values in different years are calculated and analyzed.

Details

Kybernetes, vol. 48 no. 5
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 August 2014

Roopesh Kevin Sungkur and Mayvin Ramasawmy

Abstract

Purpose

The purpose of this paper is to propose Knowledge4Scrum, a novel knowledge management tool for agile distributed teams. Agile software development (ASD) refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. The two most widely used methodologies based on the agile philosophy are scrum and extreme programming. Whichever methodology is considered, agile teams usually consist of few members and are collocated under the same roof. However, nowadays, agile practices for distributed development are gaining much momentum. The main reasons behind such practice are cheaper skilled labour, minimizing production cost, reducing time to market and increasing the quality and performance of projects. Along with the benefits obtained through globally distributed development, however, many difficulties are faced by various organisations. These problems are caused mostly by distance, time and cultural differences. To cope with the complexity of projects, ASD also has to confront many challenges, especially in the case of distributed teams. Four major challenges have been identified. First, the introduction of global software development entails a number of difficulties, especially related to knowledge sharing. For instance, lack of transparency is frequently observed within such teams, whereby a team member is totally unaware of the activities of his/her colleagues. Second, the unavailability of team members due to time zone differences adds to the list of problems confronted by distributed teams. Third, there can be misunderstandings amongst team members due to communication problems, especially when the team members have different native languages. Fourth, a common issue faced by distributed teams is the loss of knowledge when an employee resigns from his/her post.

Design/methodology/approach

Based on the main problems outlined above, what has been proposed is Knowledge4Scrum, a novel knowledge management tool for agile distributed teams. Knowledge4Scrum acts as a global repository for knowledge sharing in Scrum distributed teams, with the possibility of creating new knowledge through data mining techniques. Valid data from past projects have been collected to train and test the data mining models. The research also investigates the suitability of knowledge management in Scrum distributed teams for addressing the challenges described above.

Findings

Knowledge4Scrum supports the four knowledge management processes, namely, knowledge creation/acquisition, knowledge storage, knowledge dissemination and knowledge application. It has been found that the tool satisfactorily addresses the issues of distance, time and cultural differences that crop up in distributed development teams. Data mining has been the main vehicle for the knowledge creation and application processes, whereby new knowledge is derived by examining and extracting patterns from existing data found in the repository.

Originality/value

A major feature of the Knowledge4Scrum tool lies in the knowledge creation and application section, where a number of data mining techniques have been utilised to identify trends and patterns in past data. When compared to the COnstructive COst MOdel (COCOMO) for estimating project duration, Knowledge4Scrum gives more than satisfactory results. Such functionalities will help managers in future project planning and decision-making.
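The COnstructive COst MOdel used above as the estimation baseline can be sketched in its basic "organic-mode" form with the standard published coefficients (the paper's exact COCOMO variant and calibration are not specified, so this is a generic illustration):

```python
# Basic COCOMO, organic mode: effort and schedule from size in KLOC.
# Coefficients 2.4/1.05 and 2.5/0.38 are the standard published values.

def cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05        # effort in person-months
    duration = 2.5 * effort ** 0.38    # schedule in months
    return effort, duration

effort, duration = cocomo_organic(10.0)  # a hypothetical 10 KLOC project
```

For a 10 KLOC organic project this yields roughly 27 person-months over about 8.7 months – the kind of estimate the tool is benchmarked against.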

Article
Publication date: 10 December 2020

Zhixiong Li, Morteza Jamshidian, Sayedali Mousavi, Arash Karimipour and Iskander Tlili

Abstract

Purpose

In this paper, the uncertainties of important components and the status of the structure are obtained using condition monitoring, expert groups and multiple membership functions, by creating a fuzzy system in MATLAB.

Design/methodology/approach

In fuzzy form, the average structural safety must be tracked in order to replace damaged parts or to keep decision-making fully under control. Uncertainty in the functionality of hydraulic automated guided vehicles (AGVs), when the reliability of parts is unknown, can cause failures in the manufacturing process. It can be controlled through condition monitoring of the parts, which is subject to measurement errors and ambiguous boundaries.
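The kind of fuzzy rule evaluation such a condition-monitoring system performs can be sketched with a minimal single-input example (a generic illustration, not the paper's MATLAB system; the rule base, membership functions and thresholds below are all hypothetical):

```python
# Minimal fuzzy rule evaluation with triangular membership functions and
# weighted-average defuzzification (zero-order Sugeno style). Illustrative only.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def failure_risk(vibration):
    """Map a normalized vibration reading in [0, 1] to a risk score."""
    low = tri(vibration, -0.5, 0.0, 0.5)   # rule: low vibration  -> risk 0.1
    med = tri(vibration, 0.0, 0.5, 1.0)    # rule: medium         -> risk 0.5
    high = tri(vibration, 0.5, 1.0, 1.5)   # rule: high vibration -> risk 0.9
    w = low + med + high
    return (0.1 * low + 0.5 * med + 0.9 * high) / w

risk = failure_risk(0.75)  # reading between "medium" and "high"
```

A reading of 0.75 fires the medium and high rules equally, giving a risk of 0.7; a real monitoring system would combine several such inputs (pressure, temperature, wear) and expert-derived rules.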

Findings

As a result, this monitoring could increase productivity, with higher delivery quality in flexible manufacturing systems and a 70 per cent improvement in reliability for the hydraulic AGV parts.

Originality/value

Hydraulic AGVs have played a vital role in flexible manufacturing in recent years. Several strategies for the maintenance and repair of hydraulic AGVs exist in industry, but they are still confronted with many uncertainties. A hydraulic AGV faces uncertainty in terms of reliability after 10 years of operation, and replacing old parts with new ones may not provide the same quality and durability.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 31 no. 5
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 7 March 2016

Yajun Leng, Qing Lu and Changyong Liang

Abstract

Purpose

Collaborative recommender systems play a crucial role in providing personalized services to online consumers. Most online shopping sites and many other applications now use collaborative recommender systems. The measurement of similarity plays a fundamental role in such systems. Some of the most well-known similarity measures are Pearson's correlation coefficient, cosine similarity and mean squared differences. However, due to data sparsity, the accuracy of these similarity measures decreases, which leads to the formation of inaccurate neighborhoods and thereby to poor recommendations. The purpose of this paper is to propose a novel similarity measure based on a potential field.
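The three classic measures named above can be written down directly for two users' co-rated items (an illustrative sketch; the mean-squared-differences normalization shown is one common convention, not necessarily the paper's):

```python
# Classic user-user similarity measures over co-rated items.
from math import sqrt

def pearson(u, v):
    """Pearson's correlation coefficient between two rating vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sqrt(sum((a - mu) ** 2 for a in u)) * sqrt(sum((b - mv) ** 2 for b in v))
    return num / den

def cosine(u, v):
    """Cosine of the angle between two rating vectors."""
    return sum(a * b for a, b in zip(u, v)) / (
        sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def msd_sim(u, v, r_max=5.0):
    """Mean-squared-differences similarity (one common normalization)."""
    msd = sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
    return 1.0 - msd / r_max ** 2

u, v = [4.0, 2.0, 5.0, 3.0], [5.0, 1.0, 4.0, 2.0]
```

All three operate only on co-rated items, which is precisely why a sparse rating matrix degrades them – the motivation for the denser potential matrix proposed in the paper.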

Design/methodology/approach

The proposed approach constructs a dense matrix: user-user potential matrix, and uses this matrix to compute potential similarities between users. Then the potential similarities are modified based on users’ preliminary neighborhoods, and k users with the highest modified similarity values are selected as the active user’s nearest neighbors. Compared to the rating matrix, the potential matrix is much denser. Thus, the sparsity problem can be efficiently alleviated. The similarity modification scheme considers the number of common neighbors of two users, which can further improve the accuracy of similarity computation.

Findings

Experimental results show that the proposed approach is superior to the traditional similarity measures.

Originality/value

The research highlights of this paper are as follows: the authors construct a dense matrix, the user-user potential matrix, and use this matrix to compute potential similarities between users; the potential similarities are modified based on users' preliminary neighborhoods, and the k users with the highest modified similarity values are selected as the active user's nearest neighbors; and the proposed approach performs better than the traditional similarity measures. The manuscript will be of particular interest to scientists engaged in recommender systems research as well as to readers interested in the solution of related complex practical engineering problems.

Details

Kybernetes, vol. 45 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 19 June 2007

W. Pedrycz

Abstract

Purpose

The purpose is to formulate and present algorithms for the reconciliation of perception of information granules regarded as fuzzy sets. The paper also discusses the problem of multi‐view reconciliation of perception of granular mappings.

Design/methodology/approach

It is realized in the framework of logically‐oriented transformation of the membership functions and mappings.

Findings

A suite of optimization techniques is presented and their performance illustrated with the aid of numeric experiments.

Practical implications

An important step enhancing the development of fuzzy rule‐based systems.

Originality/value

The concept of reconciliation of information granules has been formulated for the first time. The algorithmic setting offers additional practical value.

Details

Kybernetes, vol. 36 no. 5/6
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 10 August 2010

Witold Pedrycz

Abstract

Purpose

The purpose of this paper is to show that exploiting fundamental ideas of granular computing can lead to further conceptual developments of granular metastructures, which are inherently associated with computing involving a large number of individual datasets; and to show that such processing leads to the representatives of information granules and granular models in the form of metastructures and metamodels.

Design/methodology/approach

The formulation of the concept of granular metastructures is provided and presented along with some essential algorithmic developments and associated optimization strategies. The overall methodological framework is that of granular computing, especially fuzzy sets and fuzzy sets of higher type. Given the structural facet of the optimization, the paper stresses the relevance of evolutionary optimization.

Findings

This paper focuses on the underlying concepts; while it elaborates on some development aspects and optimization tools, it should be stressed that further refinement and a thorough exploitation of optimization techniques applied to the inherently combinatorial facet of the problem remain to be pursued in detail.

Practical implications

The introduced approach and algorithms could be of interest when solving problems of granular metastructures, in particular those encountered in knowledge‐based systems.

Originality/value

The main aspects of originality concern a formulation of the concept of granular metastructures and their design, based on granular evidence (experimental data) of lower type. A constructive way of forming type‐2 fuzzy sets via the principle of justifiable granularity exhibits a significant level of originality and offers a general way of designing information granules.
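The principle of justifiable granularity mentioned above admits a compact sketch: choose the bound of an interval granule to maximize the product of coverage (how much experimental evidence the granule embraces) and specificity (how narrow it stays). The exponential specificity function and the parameter `alpha` below are illustrative assumptions, not taken from the paper:

```python
# Sketch of the principle of justifiable granularity: pick the upper bound b
# of an interval granule [median, b] maximizing coverage(b) * specificity(b).
from math import exp
from statistics import median

def justifiable_upper_bound(data, alpha=1.0):
    med = median(data)

    def score(b):
        coverage = sum(1 for x in data if med <= x <= b)   # evidence covered
        specificity = exp(-alpha * abs(b - med))           # penalty for width
        return coverage * specificity

    # Brute-force search over candidate bounds drawn from the data itself.
    return max((x for x in data if x >= med), key=score)

data = [1.0, 1.2, 1.5, 2.0, 2.2, 3.0, 5.0]
b = justifiable_upper_bound(data)
```

The lower bound is found symmetrically; applying the principle to data that are themselves granular yields the type-2 constructs the abstract refers to.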

Details

Kybernetes, vol. 39 no. 7
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 March 2001

Witold Pedrycz and Adam Gacek

Abstract

Shows that signal quantization can be conveniently captured and quantified in the language of information granules. Optimal codebooks exploited in any signal quantization (discretization) lend themselves to the underlying fundamental issues of information granulation. The paper elaborates on and contrasts various forms of information granulation, such as sets, shadowed sets, and fuzzy sets. It is revealed that a set‐based codebook can easily be enhanced by the use of shadowed sets. This also raises awareness about the performance of the quantization process and helps increase its quality by defining additional elements of the codebook and specifying their range of applicability. We show how different information granules contribute to the performance of signal quantization. The role of clustering techniques giving rise to information granules is also analyzed. Some pertinent theoretical results are derived. It is shown that fuzzy sets defined in terms of piecewise linear membership functions with 1/2 overlap between any two adjacent terms of the codebook give rise to the effect of lossless quantization. The study addresses both scalar and multivariable quantization. Numerical studies are included to help illustrate the quantization mechanisms carried out in the setting of granular computing.
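The lossless-quantization claim can be checked numerically: triangular membership functions with 1/2 overlap form a partition of unity, so decoding the membership degrees against the codebook prototypes recovers the signal within the prototype range (an illustrative check, not the paper's derivation):

```python
# Triangular fuzzy codebook with 1/2 overlap: encode a scalar into
# membership degrees, then decode with the prototypes.

def memberships(x, protos):
    """Triangular fuzzy sets peaking at protos[i], supported on the neighbours."""
    u = []
    for i, c in enumerate(protos):
        left = protos[i - 1] if i > 0 else c
        right = protos[i + 1] if i + 1 < len(protos) else c
        if x < left or x > right:
            u.append(0.0)
        elif x <= c:
            u.append(1.0 if c == left else (x - left) / (c - left))
        else:
            u.append((right - x) / (right - c))
    return u

def decode(u, protos):
    """Weighted reconstruction from membership degrees."""
    return sum(ui * c for ui, c in zip(u, protos))

protos = [0.0, 0.25, 0.5, 0.75, 1.0]
x_hat = decode(memberships(0.62, protos), protos)  # recovers 0.62 (up to rounding)
```

Because any x falls in the support of exactly two adjacent terms whose memberships sum to one, the weighted reconstruction is exact – the effect the abstract describes as lossless quantization.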

Details

Kybernetes, vol. 30 no. 2
Type: Research Article
ISSN: 0368-492X
