Search results

1 – 10 of over 7000
Article
Publication date: 21 June 2013

Hendry Raharjo

Abstract

Purpose

This paper aims to investigate the need to normalize the relationship matrix in quality function deployment (QFD), especially when normalization leads to rank reversal, and to provide a guideline for when normalization should be done.

Design/methodology/approach

The research was carried out on the basis of empirical observations and previous research data.

Findings

A rule of thumb is proposed for determining when rank reversal, as a result of normalizing the QFD relationship matrix, is desirable or undesirable.
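As a hypothetical illustration of the effect this rule of thumb addresses (the weights and matrix below are invented for demonstration, not taken from the paper), row-wise normalization of a small relationship matrix can reverse the resulting ranking of technical characteristics:

```python
# Rows = customer requirements (with importance weights),
# columns = technical characteristics. All numbers are invented.

def raw_scores(weights, matrix):
    """Technical importance without normalization: weighted column sums."""
    cols = len(matrix[0])
    return [sum(weights[i] * matrix[i][j] for i in range(len(matrix)))
            for j in range(cols)]

def normalized_scores(weights, matrix):
    """Normalize each row by its row sum before taking weighted column sums."""
    norm = [[v / sum(row) for v in row] for row in matrix]
    return raw_scores(weights, norm)

def ranking(scores):
    """Column indices sorted from most to least important."""
    return sorted(range(len(scores)), key=lambda j: -scores[j])

weights = [1, 2]
matrix = [[9, 1],   # requirement 1 relates strongly to TC0
          [1, 3]]   # requirement 2 relates moderately to TC1

print(ranking(raw_scores(weights, matrix)))         # [0, 1]
print(ranking(normalized_scores(weights, matrix)))  # [1, 0] - rank reversal
```

Whether such a reversal is desirable or not is exactly the question the proposed rule of thumb is meant to answer.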

Research limitations/implications

Since the rule of thumb has an empirical basis, it may not work perfectly for every single case, especially for large-sized QFD matrices. Hence, this opens up a new challenge for future research to complement the current findings.

Practical implications

This paper shows that QFD practitioners should be aware that normalization of the QFD relationship matrix is not a trivial issue, especially when it causes rank reversal. Ignoring normalization might yield misleading results; however, using normalization does not always guarantee reliable results either.

Originality/value

This paper offers two novel contributions. The first is an exposition of the pros and cons of normalization in the QFD relationship matrix. The second is the proposed rule of thumb, which may serve as an important guideline for any QFD practitioner dealing with the relationship matrix.

Details

International Journal of Quality & Reliability Management, vol. 30 no. 6
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 24 June 2019

Nazanin Vafaei, Rita A. Ribeiro, Luis M. Camarinha-Matos and Leonilde Rocha Valera

Abstract

Purpose

Normalization is a crucial step in all decision models, to produce comparable and dimensionless data from heterogeneous data. As such, various normalization techniques are available but their performance depends on a number of characteristics of the problem at hand. Thus, this study aims to introduce a recommendation framework for supporting users to select data normalization techniques that better fit the requirements in different application scenarios, based on multi-criteria decision methods.

Design/methodology/approach

Following the proposed approach, the authors compare six well-known normalization techniques applied to a case study of selecting suppliers in collaborative networks.
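As a sketch of what such a comparison involves, here are two of the standard normalization techniques from the multi-criteria decision-making literature applied to the same column of invented supplier scores (a minimal illustration, not the authors' code or data):

```python
# Two common normalization techniques for a benefit criterion in a
# decision matrix. The supplier scores below are invented.

def max_norm(values):
    """Divide by the column maximum: maps into (0, 1], preserves ratios."""
    m = max(values)
    return [v / m for v in values]

def min_max_norm(values):
    """Linear scaling to [0, 1]: best value -> 1, worst value -> 0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

scores = [20, 35, 50]           # one criterion across three suppliers
print(max_norm(scores))         # [0.4, 0.7, 1.0]
print(min_max_norm(scores))     # [0.0, 0.5, 1.0]
```

The two techniques produce different dimensionless values for the same raw data, which is why the choice of technique can change the final supplier ranking and why a recommendation framework is useful.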

Findings

With this recommendation framework, the authors expect to contribute to improving the normalization of criteria in the evaluation and selection of suppliers and business partners in dynamic networked collaborative systems.

Originality/value

This is the first study to compare normalization techniques in order to select the best normalization for dynamic multiple-criteria decision-making models in collaborative networks.

Details

Kybernetes, vol. 49 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 January 2010

Olusegun Folorunso and Adio Taofeek Akinwale

Abstract

Purpose

In tertiary institutions, some students find it hard to learn database design theory, in particular database normalization. The purpose of this paper is to develop a visualization tool that gives students an interactive, hands-on experience of the database normalization process.

Design/methodology/approach

The model‐view‐controller architecture is used to alleviate the black-box syndrome associated with studying algorithm behavior in the database normalization process. The authors propose an exploratory visualization tool that assists learners in understanding the actual behavior of the database normalization algorithm of their choice and in evaluating the validity and quality of the algorithm. This paper describes the visualization tool and its effectiveness in teaching and learning normal forms and their functional dependencies.
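A minimal sketch of the kind of check such a tool animates (the relation and attribute names below are invented, and this is not the paper's implementation): testing whether a functional dependency X → Y holds in a relation instance, the basic operation behind normal-form analysis.

```python
# Test whether the functional dependency lhs -> rhs holds in a relation
# instance: no two rows may agree on lhs but differ on rhs.

def fd_holds(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False   # same determinant, different dependent: violated
        seen[key] = val
    return True

students = [
    {"id": 1, "dept": "CS",   "dept_head": "Ada"},
    {"id": 2, "dept": "CS",   "dept_head": "Ada"},
    {"id": 3, "dept": "Math", "dept_head": "Emmy"},
]
print(fd_holds(students, ["dept"], ["dept_head"]))  # True
print(fd_holds(students, ["dept_head"], ["id"]))    # False
```

The transitive dependency id → dept → dept_head in this instance is exactly the pattern that third normal form eliminates by decomposing the relation.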

Findings

The effectiveness of the tool was evaluated in surveys, which show that students generally viewed the tool more positively than the textbook technique; the difference is significant at p<0.05 (t=1.645). Mean interaction precision, calculated using expert-judge relevance ratings, also shows a significant difference between the visualization tool and the textbook (3.74 against 2.61, t=6.69).

Originality/value

The visualization tool helped students validate and check their learning of the normalization process. Consequently, the paper shows that the tool has a positive impact on students' perception.

Details

Campus-Wide Information Systems, vol. 27 no. 1
Type: Research Article
ISSN: 1065-0741

Article
Publication date: 1 December 2003

M.J. Taylor, S. Wade and D. England

Abstract

Designing a truly customer-focused Web site can be difficult. This paper examines the approach of adapting the existing database design technique of normalisation to achieve customer-focused Web site design. A Web site can be thought of as a multimedia database; indeed, under European law a Web site is legally classified as a database. In traditional database design, normalisation is used to structure data and provide efficient keys for data retrieval. In Web site normalisation, the “data” are the text, images, and functions (for example, e‐mail and ordering goods) required on the Web site, and the keys are Web site headings, sub‐groupings, and topics. Web site normalisation allows the designer to structure Web site material and identify the optimal navigational structure from a customer perspective, and thus produce a truly customer-focused Web site. A case study in a UK marketing organisation is provided to demonstrate and evaluate Web site normalisation in action.

Details

Internet Research, vol. 13 no. 5
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 3 November 2020

Jagroop Kaur and Jaswinder Singh

Abstract

Purpose

Normalization is an important step in all natural language processing applications that handle social media text. Text from social media poses problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly for the English language. People who do not speak English code-mix text with their native language and post it on social media using the Roman script, which further aggravates the normalization problem. This paper aims to discuss the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.

Design/methodology/approach

The system is divided into two phases: candidate generation and most-probable-sentence selection. Candidate generation is treated as a machine translation task in which Roman text is the source language and Gurmukhi text is the target language. A character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses beam search to select the most probable sentence based on a hidden Markov model.

Findings

Character error rate (CER) and bilingual evaluation understudy (BLEU) score are reported. The proposed system is compared with the Akhar software and the RB_R2G system, which are also capable of transliterating Roman text to Gurmukhi, and it outperforms the Akhar software. The CER and BLEU scores are 0.268121 and 0.6807939, respectively, for ill-formed text.
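The CER metric reported above is conventionally computed as the Levenshtein edit distance between the hypothesis and the reference, divided by the reference length. A minimal sketch, assuming that conventional definition rather than the authors' exact code:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance, one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(hypothesis, reference):
    """Character error rate: edits needed, per reference character."""
    return levenshtein(hypothesis, reference) / len(reference)

print(cer("kitten", "sitting"))  # 3 edits over 7 characters
```

A CER of 0.268 thus means roughly one character-level edit is needed for every four characters of reference text.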

Research limitations/implications

It was observed that the system produces dialectal variations of a word, or words with minor errors such as missing diacritics; a spell checker could improve the output by correcting these errors. Extensive experimentation is needed to optimize the language identifier, which would further improve the output. The language model also merits further exploration, and the inclusion of wider context, particularly from social media text, is an important area for further investigation.

Practical implications

The practical implications of this study are: (1) development of a parallel dataset containing Roman and Gurmukhi text; (2) development of a dataset annotated with language tags; (3) development of the normalization system, which is the first of its kind, proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi script, and can be extended to any pair of scripts; and (4) the use of the proposed system for better analysis of social media text. Theoretically, this study contributes to a better understanding of text normalization in the social media context and opens the door for further research in multilingual social media text normalization.

Originality/value

Existing research focuses on normalizing monolingual text. This study contributes towards the development of a normalization system for multilingual text.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 25 September 2019

Nabil Moukafih, Ghizlane Orhanou and Said Elhajji

Abstract

Purpose

This paper aims to propose a mobile agent-based security information and event management architecture (MA-SIEM) that uses mobile agents for near real-time event collection and normalization on the source device. By externalizing the normalization process to several distributed mobile agents running on interconnected computers and devices, the architecture leaves the SIEM server dedicated mainly to correlation and analysis.

Design/methodology/approach

The architecture was developed in three stages. First, the authors describe the different aspects of the proposed approach. They then implement the proposed architecture and present a new vision for inserting normalized data into the SIEM database. Finally, they perform a numerical comparison between the approach used in the proposed architecture and that of existing SIEM systems.

Findings

The results of the experiments showed that MA-SIEM systems are more efficient than existing SIEM systems because they leave the SIEM resources primarily dedicated to advanced correlation analysis. In addition, this paper takes into account realistic scenarios and use-cases and proposes a fully automated process for transferring normalized events in near real time to the SIEM server for further analysis using mobile agents.

Originality/value

The work provides new insights into the normalization of security-related events using lightweight mobile agents.

Details

Information & Computer Security, vol. 28 no. 1
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 3 October 2016

Hendi Yogi Prabowo and Kathie Cooper

Abstract

Purpose

Based on the authors’ study, the purpose of this paper is to better understand why corruption in the Indonesian public sector is so resilient from three behavioral perspectives: the Schemata Theory, the Corruption Normalization Theory and the Moral Development Theory.

Design/methodology/approach

This paper examines corruption trends and patterns in the Indonesian public sector in the past decade through examination of reports from various institutions as well as other relevant documents regarding corruption-related issues to gain a better understanding of the behavioral mechanisms underlying the adoption of corruption into organizational and individual schemata. This paper also uses expert interviews and focus group discussions with relevant experts in Indonesia and Australia on various corruption-related issues.

Findings

The authors establish that the rampant corruption in the Indonesian public sector is an outcome of cumulative decision-making processes by the participants. Such a process is influenced by individual and organizational schemata to interpret problems and situations based on past knowledge and experience. The discussion in this paper highlights the mechanisms of corruption normalization used to sustain corruption networks, especially in the Indonesian public sector, which will be very difficult to break with conventional means such as detection and prosecution. Essentially, the entire process of normalization will cause moral degradation among public servants to the point where their actions are driven solely by the fear of punishment and the expectation of personal benefits. The three pillars of institutionalization, rationalization and socialization strengthen one another to make the normalization structure so resilient that short-term-oriented anti-corruption measures may not even put a dent in it. The normalization structure can be brought down only when it is continuously struck with sufficient force at its pillars. Corruption will truly perish from Indonesia only when the societal, organizational and individual schemata have been re-engineered to interpret it as an aberration and not as a norm.

Research limitations/implications

Due to the limited time and resources, the discussion on the normalization of corruption in Indonesia is focused on corruption within the Indonesian public institutions by interviewing anti-fraud professionals and scholars. A more complete picture of corruption normalization in Indonesia can be drawn from interviews with incarcerated corruption offenders from Indonesian public institutions.

Practical implications

This paper contributes to the development of corruption eradication strategy by deconstructing corruption normalization processes so that the existing resources can be allocated effectively and efficiently into areas that will result in long-term benefits.

Originality/value

This paper demonstrates how the seemingly small and insignificant behavioral factors may constitute “regenerative healing factor” for corruption in Indonesia.

Details

Journal of Financial Crime, vol. 23 no. 4
Type: Research Article
ISSN: 1359-0790

Article
Publication date: 2 October 2017

Hendi Yogi Prabowo, Kathie Cooper, Jaka Sriyana and Muhammad Syamsudin

Abstract

Purpose

Based on the authors’ study, the purpose of this paper is to ascertain the best approach to mitigate corruption in the Indonesian public sector. To do so, the paper uses three behavioral perspectives: the Schemata Theory, the Corruption Normalization Theory and the Moral Development Theory.

Design/methodology/approach

This paper is part of the authors’ study examining corruption patterns in Indonesia over the past 10 years through examination of reports from various institutions, as well as other relevant documents addressing corruption-related issues, to explore options for mitigating corruption through behavioral re-engineering. To gain varied perspectives on anti-corruption measures, the study also uses expert interviews and focus group discussions with relevant experts in Indonesia and Australia on various corruption-related issues.

Findings

The authors establish that despite the fall of the New Order regime nearly two decades ago, corruption remains entrenched within the post-Suharto Governments. The normalized corruption in Indonesia is a legacy of the New Order regime that shaped societal, organizational and individual schemata in Indonesia. The patrimonial style of leadership, in particular within the regional governments, resulted in increasing rent-seeking activities within the decentralized system and is believed to have been supporting the normalization of corruption within the public sector since the New Order era. The three-decade-old systematic normalization of corruption in the Indonesian public sector can only be changed by means of long and systematic de-normalization initiatives. To design the best intervention measures, decision makers must first identify the multiple factors that constitute the three normalization pillars: institutionalization, rationalization and socialization. Measures such as periodic reviews of operational procedures, appointment of leaders with sound morality, anti-corruption education programs and administering “cultural shocks”, just to name a few, can be part of multifaceted strategies to bring down the normalization pillars.

Research limitations/implications

The discussion on the options for de-normalization of corruption in Indonesia is focused on corruption within the Indonesian public institutions by interviewing anti-fraud professionals and scholars. A better formulation of strategic approaches can be developed by means of interviews with incarcerated corruption offenders from the Indonesian public institutions.

Practical implications

This paper contributes to the development of corruption eradication strategy by suggesting options for de-normalizing corruption in the Indonesian public sector so that resources can be allocated more effectively and efficiently to mitigate the problem.

Originality/value

This paper highlights the importance of behavior-oriented approaches in mitigating corruption in the Indonesian public sector.

Details

Journal of Financial Crime, vol. 24 no. 4
Type: Research Article
ISSN: 1359-0790

Content available
Article
Publication date: 17 July 2020

Mukesh Kumar and Palak Rehan

Abstract

Social media networks such as Twitter, Facebook and WhatsApp are among the most commonly used media for sharing news and opinions and for staying in touch with peers. Messages on Twitter are limited to 140 characters, which has led users to create their own novel syntax in tweets to express more in fewer words. Free writing style, use of URLs, markup syntax, inappropriate punctuation, ungrammatical structures and abbreviations make it harder to mine useful information from tweets. For each tweet, we can get an explicit time stamp, the name of the user, the social network the user belongs to, and even GPS coordinates if the tweet is created with a GPS-enabled mobile device. With these features, Twitter is by nature a good resource for detecting and analyzing real-time events happening around the world. By using the speed and coverage of Twitter, we can detect events (sequences of important keywords being talked about) in a timely manner, which can be used in applications such as natural calamity relief support, earthquake relief support, product launches and suspicious activity detection.

The keyword detection process on Twitter can be seen as a two-step process: detection of keywords in raw text form (words as posted by users) and keyword normalization (reforming users’ unstructured words into complete, meaningful English words). In this paper, a keyword detection technique based upon graphs, spanning trees and the PageRank algorithm is proposed. A text normalization technique based upon a hybrid approach using Levenshtein distance, the demetaphone algorithm and dictionary mapping is proposed to work upon the unstructured keywords produced by the proposed keyword detector. The proposed normalization technique is validated using the standard lexnorm 1.2 dataset. The proposed system is used to detect keywords from Twitter text posted in real time, and the detected and normalized keywords are further validated from search engine results at a later time for event detection.
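A minimal sketch of the dictionary-mapping step of such a hybrid normalizer (this is an assumed illustration, not the authors' implementation, and the tiny lexicon is invented): map a noisy token to the dictionary word with the smallest Levenshtein distance.

```python
def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Invented toy lexicon; a real system would use a full dictionary
# plus phonetic matching to catch respellings like "gr8".
LEXICON = ["tomorrow", "today", "great", "earthquake"]

def normalize_token(token):
    """Return the lexicon entry closest to the noisy token."""
    return min(LEXICON, key=lambda w: edit_distance(token, w))

print(normalize_token("2morrow"))  # -> "tomorrow"
print(normalize_token("gr8"))      # -> "great"
```

Edit distance alone misses purely phonetic respellings, which is why the paper combines it with a phonetic algorithm and direct dictionary mapping.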

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 18 September 2017

Rodrigo Costas, Antonio Perianes-Rodríguez and Javier Ruiz-Castillo

Abstract

Purpose

The introduction of “altmetrics” as new tools to analyze scientific impact within the reward system of science has challenged the hegemony of citations as the predominant source for measuring scientific impact. Mendeley readership has been identified as one of the most important altmetric sources, with several features that are similar to citations. The purpose of this paper is to perform an in-depth analysis of the differences and similarities between the distributions of Mendeley readership and citations across fields.

Design/methodology/approach

The authors analyze two issues by using in each case a common analytical framework for both metrics: the shape of the distributions of readership and citations, and the field normalization problem generated by differences in citation and readership practices across fields. In the first issue the authors use the characteristic scores and scales method, and in the second the measurement framework introduced in Crespo et al. (2013).

Findings

There are three main results. First, the citations and Mendeley readership distributions exhibit a strikingly similar degree of skewness in all fields. Second, the results on “exchange rates” (ERs) for Mendeley readership empirically support the possibility of comparing readership counts across fields, as well as the field normalization of readership distributions using ERs as normalization factors. Third, field normalization using field mean readerships as normalization factors leads to comparably good results.
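The third result, normalization by field means, can be sketched in a few lines (the field names and readership counts below are invented, not the paper's data): dividing each paper's count by its field's mean count makes counts from low- and high-readership fields comparable.

```python
def field_normalize(counts_by_field):
    """Divide each paper's readership count by its field's mean count."""
    out = {}
    for field, counts in counts_by_field.items():
        mean = sum(counts) / len(counts)
        out[field] = [c / mean for c in counts]
    return out

readership = {
    "mathematics": [2, 4, 6],     # a low-readership field
    "biology":     [20, 40, 60],  # a high-readership field
}
print(field_normalize(readership))
# both fields map to [0.5, 1.0, 1.5]: a score of 1.0 means
# "average for the field", regardless of the field's absolute level
```

The paper's comparison is between such mean-based factors and the exchange-rate factors, which it finds perform comparably well.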

Originality/value

These findings open up challenging new questions, particularly regarding the possibility of obtaining conflicting results from field normalized citation and Mendeley readership indicators; this suggests the need for better determining the role of the two metrics in capturing scientific recognition.

Details

Aslib Journal of Information Management, vol. 69 no. 5
Type: Research Article
ISSN: 2050-3806
