Search results

1 – 10 of over 17,000
Article
Publication date: 28 January 2014

Kaisa Airo and Suvi Nenonen

Abstract

Purpose

The purpose of this article is to review the use of linguistic methods such as narrative and discourse analysis in workplace management research.

Design/methodology/approach

Ten journals were reviewed over the period 2004-2010. The journals were categorized into three linguistic-methodology journals and seven journals on the built environment. Additionally, articles were gathered using the search terms workplace management, discourse analysis and narrative analysis. Of the 2,245 articles in total, 40 were considered relevant for this research.

Findings

The linguistic methods of narrative and discourse analysis are not recognized in workplace management research in a comprehensive way that would combine research on the built environment with research on organization and culture. In workplace management research, the methods of narrative and discourse analysis have been applied to the processes of the built environment, and additionally to research on space and place as means of communication and of identity construction.

Practical implications

A linguistic approach would reveal the underlying messages behind the evident structures of the workplace and give new insights into understanding and developing workplaces, both in design and in use.

Originality/value

The linguistic methods of narrative and discourse analysis are rarely used in workplace management research and should be considered a new resource for workplace management (WPM) research.

Details

Facilities, vol. 32 no. 1/2
Type: Research Article
ISSN: 0263-2772

Article
Publication date: 24 October 2022

Chaoyu Zheng, Benhong Peng, Xuan Zhao, Guo Wei, Anxia Wan and Mu Yue

Abstract

Purpose

Identifying the critical success factors (CSFs) of public health emergencies (PHEs) is of great practical significance for carrying out a scientific and effective risk assessment. The purpose of this paper is to address this issue.

Design/methodology/approach

In this paper, the authors propose a new approach to identify the CSFs using hesitant fuzzy linguistic sets and the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method. First, a large group of experts is clustered into three groups according to similarity degree. Then, the weight of each cluster is determined by the maximum consensus method, and the overall direct influence matrix is obtained by aggregating the clusters with hesitant fuzzy linguistic weighted geometric (HFLWG) operators. Finally, the overall direct influence matrix is transformed into a crisp direct influence matrix by the score function, and 11 CSFs of PHEs are identified using the extended DEMATEL method.
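
For readers unfamiliar with the underlying technique, the following is a minimal Python sketch of the crisp DEMATEL step described above; the hesitant fuzzy linguistic aggregation, the maximum consensus weighting and the paper's score function are not reproduced, and the factor matrix shown is illustrative rather than taken from the study.

import numpy as np

def dematel(direct):
    # Normalize the direct influence matrix by the larger of the maximum
    # row sum and the maximum column sum.
    n = direct.shape[0]
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    norm = direct / s
    # Total relation matrix T = N (I - N)^(-1).
    total = norm @ np.linalg.inv(np.eye(n) - norm)
    r = total.sum(axis=1)   # influence exerted by each factor
    c = total.sum(axis=0)   # influence received by each factor
    return total, r + c, r - c   # total relations, prominence, net cause/effect

# Illustrative crisp direct influence matrix for four hypothetical factors.
D = np.array([[0.0, 3.0, 2.0, 1.0],
              [1.0, 0.0, 3.0, 2.0],
              [2.0, 1.0, 0.0, 3.0],
              [1.0, 2.0, 1.0, 0.0]])
T, prominence, relation = dematel(D)
print(np.where(prominence > prominence.mean())[0])   # factors flagged as critical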

Findings

An example of PHEs shows that the approach has good applicability for identification. The approach can address the fuzziness and subjectivity of linguistic assessments, and it can be applied to identify the CSF framework through the linguistic assessment process in emergency management.

Originality/value

This paper extends the DEMATEL method to the hesitant fuzzy linguistic context. The proposed hybrid approach has wide applicability in high-risk areas where disasters frequently occur.

Details

Aslib Journal of Information Management, vol. 75 no. 6
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 15 January 2018

Meng-Xian Wang and Jian-qiang Wang

Abstract

Purpose

With the advent of the Web 2.0 era, online reviews increasingly exhibit the characteristics of bidirectional communication and tend to carry asymmetric and individualized linguistic information. The authors aim to develop a new linguistic conversion model that exploits this asymmetric and personalized information from online reviews to express such linguistic information, and a new online recommendation approach is provided.

Design/methodology/approach

The necessity of a new linguistic conversion model is elucidated, and a leverage factor is incorporated into the linguistic label of negative reviews to handle the asymmetry of the linguistic scale. A possible value range of the leverage factor is studied. A new linguistic conversion model is accordingly established with an unbalanced linguistic label and a cloud model. The authors develop a new online recommendation approach based on several modules: initialization, conversion, user clustering and recommendation.
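
As a purely illustrative sketch of the leverage-factor idea (not the paper's actual model), negative review labels can be amplified so that a negative label is not simply offset by an equally strong positive one; the label indices, the leverage value of 1.5 and the simple averaging below are assumptions for illustration, and the unbalanced label set and cloud-model conversion are not reproduced.

LEVERAGE = 1.5   # hypothetical value; the paper studies an admissible range for this factor

def adjusted_index(label_index):
    # Amplify negative labels (indices below zero on a symmetric -2..+2 scale).
    return label_index * LEVERAGE if label_index < 0 else float(label_index)

reviews = [2, 1, -1, 2, -2]                       # illustrative per-review label indices
overall = sum(adjusted_index(x) for x in reviews) / len(reviews)
print(overall)                                    # 0.1, lower than the unweighted mean of 0.4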

Findings

The unbalanced effect between negative and positive reviews is verified with real data and measured using indirect methods. A new online recommendation approach for electronic products is proposed, and an illustrative example demonstrates the practicality, effectiveness and feasibility of the proposed approach.

Research limitations/implications

Because customers' transaction information was unavailable, the limitation of this study is that the effectiveness of the authors' recommendation system for a platform or website cannot be verified.

Originality/value

In most existing studies, the influence of negative reviews is counterbalanced by positive reviews, and the unbalanced effect between negative and positive reviews is ignored. Negative reviews receive considerable attention from consumers and businesses, and this study therefore highlights their influence.

Details

Kybernetes, vol. 47 no. 7
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 13 July 2018

Mehtap Dursun and Nazli Goker

Abstract

Purpose

Neuromarketing, an interdisciplinary area, concentrates on evaluating consumers’ cognitive and emotional reactions to different marketing stimuli. Despite its advantages, neuromarketing still requires development and lacks a strong theoretical framework. The techniques used in neuromarketing studies have different strengths and limitations, so their relevance needs to be evaluated. The purpose of this study is to introduce a novel integrated approach for the neuromarketing research area.

Design/methodology/approach

The proposed approach combines the 2-tuple linguistic representation model with data envelopment analysis (DEA) to identify the most efficient neuromarketing technique. It can handle information provided on both linguistic and numerical scales by multiple information sources. Furthermore, it allows managers to deal with heterogeneous information without loss of information.
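
As background to the 2-tuple linguistic representation model mentioned above, the following is a minimal Python sketch of how a linguistic label set can be translated to and from numeric values without loss of information; the DEA efficiency model is not reproduced, and the label set, ratings and weights are illustrative assumptions.

LABELS = ["very poor", "poor", "fair", "good", "very good"]   # s_0 .. s_4

def to_two_tuple(beta):
    # Translate a value in [0, 4] into (label, symbolic translation alpha).
    i = int(round(beta))
    return LABELS[i], round(beta - i, 3)   # alpha lies in [-0.5, 0.5)

def weighted_mean_two_tuple(label_indices, weights):
    # Aggregate on the numeric scale, then convert back to a 2-tuple,
    # so no information is lost to premature rounding.
    beta = sum(a * w for a, w in zip(label_indices, weights)) / sum(weights)
    return to_two_tuple(beta)

# Three experts rate one neuromarketing technique on the scale above.
print(weighted_mean_two_tuple([3, 4, 2], [1, 1, 1]))   # ('good', 0.0)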

Findings

The proposed approach indicates that functional magnetic resonance imaging (fMRI) is the best-performing neuromarketing technology. Recently, fMRI has been widely used in neuromarketing research. Despite its high cost, its main strengths are improved spatial and temporal resolution. In contrast, transcranial magnetic stimulation (TMS) and positron emission tomography (PET) are ranked at the bottom because of their poorer resolution and the lower willingness of participants.

Originality/value

This paper proposes a common-weight DEA-based decision model to cope with the heterogeneous information collected from the experts and to determine the best-performing neuromarketing technology. The decision procedure enables decision-makers to handle the problems of loss of information and multi-granularity by fusing the 2-tuple linguistic representation model with fuzzy information. Moreover, a DEA-based common-weight model does not require subjective expert opinions to weight the evaluation criteria.

Article
Publication date: 22 March 2023

Yasmin Richards, Mark McClish and David Keatley

Abstract

Purpose

The purpose of this paper is to address the complexity of missing persons cases and highlight the linguistic differences that arise in this type of crime. Missing persons cases are typically very complex investigations. Without a body, crime scene forensics is not possible, and police are often left only with witness and suspect statements. Forensic linguistics methods may help investigators to prioritise or remove suspects. There are many competing approaches in forensic linguistic analysis; however, there is limited empirical research available on emerging methods.

Design/methodology/approach

This research investigates Statement Analysis, a recent development in linguistic analysis that has practical applications in criminal investigations. Real-world statements of individuals convicted of, or found not guilty of, involvement in missing persons cases were used in the analyses. In addition, Behaviour Sequence Analysis was used to map the progression of language in the suspects' statements.
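
To illustrate the sequence-mapping step that Behaviour Sequence Analysis relies on, here is a minimal Python sketch that counts how often one coded linguistic indicator follows another; the indicator codes and statements are invented for illustration, and the paper's actual coding scheme and statistical testing are not reproduced.

from collections import Counter
from itertools import pairwise   # Python 3.10+

def transition_counts(sequences):
    # Count how often each coded indicator immediately follows another.
    counts = Counter()
    for seq in sequences:
        counts.update(pairwise(seq))
    return counts

coded_statements = [
    ["pronoun_drop", "vague_word", "temporal_lacuna", "vague_word"],
    ["vague_word", "temporal_lacuna", "passive_voice"],
]
for (a, b), n in transition_counts(coded_statements).most_common(3):
    print(f"{a} -> {b}: {n}")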

Findings

Results indicated differences between the guilty and innocent individuals based on their language choices. For example, guilty suspects in missing [alive] cases were more likely to use passive language and vague words because of the high cognitive load associated with the several types of guilty knowledge that suspects in missing persons cases possess. Of particular interest is the use of untruthful words in the innocent suspects’ statements in missing [murdered] cases: while typically seen in deceptive statements, untruthful words in innocent statements may result from false acquittals.

Originality/value

This research provides some support for Statement Analysis as a suitable approach to analysing linguistic statements in missing persons cases.

Details

Journal of Criminal Psychology, vol. 13 no. 4
Type: Research Article
ISSN: 2009-3829

Article
Publication date: 1 December 2003

Da Ruan, Jun Liu and Roland Carchon

Abstract

A flexible and realistic linguistic assessment approach is developed to provide a mathematical tool for the synthesis and evaluation of nuclear safeguards indicator information. This symbolic approach, which acts by direct computation on linguistic terms, is established on the basis of fuzzy set theory. More specifically, a lattice-valued linguistic algebra model, based on the logical algebraic structure of lattice implication algebra, is applied to represent imprecise information and to deal with both comparable and incomparable linguistic terms (i.e. non-ordered linguistic values). Within this framework, some weighted aggregation functions introduced by Yager are analyzed and extended to treat this kind of lattice-valued linguistic information. The application of these linguistic aggregation operators to managing nuclear safeguards indicator information is successfully demonstrated.
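
As a simplified illustration of the kind of weighted aggregation functions attributed to Yager, the following Python sketch applies an ordered weighted averaging (OWA) operator to indices of a totally ordered label set; the lattice-valued treatment of incomparable terms, which is the paper's actual contribution, is not reproduced, and the labels, weights and assessments are assumptions for illustration.

LABELS = ["very low", "low", "medium", "high", "very high"]

def owa(label_indices, weights):
    # Sort the arguments in descending order, then take their weighted sum.
    ordered = sorted(label_indices, reverse=True)
    beta = sum(w * b for w, b in zip(weights, ordered)) / sum(weights)
    return LABELS[round(beta)]

# Four safeguards indicators assessed on the scale above.
print(owa([4, 2, 3, 1], [0.4, 0.3, 0.2, 0.1]))   # 'high'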

Details

Logistics Information Management, vol. 16 no. 6
Type: Research Article
ISSN: 0957-6053

Article
Publication date: 1 August 2005

Carmen Galvez, Félix de Moya‐Anegón and Víctor H. Solana

Abstract

Purpose

To propose a categorization of the different conflation procedures into the two basic approaches of non-linguistic and linguistic techniques, and to justify the application of normalization methods within the framework of linguistic techniques.

Design/methodology/approach

Presents a range of term conflation methods that can be used in information retrieval. Uniterm and multiterm variants can be considered equivalent units for the purposes of automatic indexing. Stemming algorithms, segmentation rules, association measures and clustering techniques are well-evaluated non-linguistic methods, and experiments with these techniques show a wide variety of results. Alternatively, lemmatisation and syntactic pattern matching, through equivalence relations represented in finite-state transducers (FSTs), are emerging methods for the recognition and standardization of terms.
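
To make the contrast between the two families of methods concrete, here is a minimal Python sketch comparing rule-based stemming (a non-linguistic technique) with dictionary-based lemmatisation (a linguistic technique) using the NLTK library; the FST-based pattern matching discussed in the article is not reproduced, and the word list is illustrative.

# Requires: pip install nltk, plus nltk.download("wordnet") for the lemmatizer.
from nltk.stem import PorterStemmer, WordNetLemmatizer

variants = ["studies", "studying", "studied"]
stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()

# Stemming truncates by suffix rules; lemmatisation maps to a dictionary form.
print([stemmer.stem(w) for w in variants])                    # e.g. ['studi', 'studi', 'studi']
print([lemmatizer.lemmatize(w, pos="v") for w in variants])   # e.g. ['study', 'study', 'study']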

Findings

The survey attempts to point out the positive and negative effects of the linguistic approach and its potential as a term conflation method.

Originality/value

Outlines the importance of FSTs for the normalization of term variants.

Details

Journal of Documentation, vol. 61 no. 4
Type: Research Article
ISSN: 0022-0418

Book part
Publication date: 9 November 2020

Siân Alsop, Virginia King, Genie Giaimo and Xiaoyu Xu

Abstract

In this chapter, we explore uses of corpus linguistics within higher education research. Corpus linguistic approaches enable the computer-assisted examination of large bodies of language data. These bodies of data, or corpora, facilitate investigation of the meaning of words in context. The semiautomated nature of such investigation helps researchers to identify and interpret language patterns that might otherwise be inaccessible through manual analysis. We illustrate potential uses of corpus linguistic approaches through four short case studies by higher education researchers, spanning educational contexts, disciplines and genres. These case studies are underpinned by discussion of the development of corpus linguistics as a field of investigation, including existing open corpora and corpus analysis tools. We give a flavour of how corpus linguistic techniques, in isolation or as part of a wider research approach, can be particularly helpful to higher education researchers who wish to investigate language data and its context.
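
As a taste of the "words in context" investigation described above, the following Python sketch produces a simple keyword-in-context (concordance) view; the toy corpus and naive whitespace tokenisation are assumptions for illustration, and real corpus tools offer far richer functionality.

def kwic(tokens, keyword, window=3):
    # Yield each occurrence of the keyword with a few tokens of context either side.
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield f"{left} [{tok}] {right}"

corpus = ("Feedback on the draft was generally positive and the feedback "
          "informed later revisions of the module handbook").split()
for line in kwic(corpus, "feedback"):
    print(line)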

Details

Theory and Method in Higher Education Research
Type: Book
ISBN: 978-1-80043-321-2

Article
Publication date: 11 January 2016

Jindong Qin and Xinwang Liu

Abstract

Purpose

The purpose of this paper is to develop 2-tuple linguistic aggregation operators based on the Muirhead mean (MM), to combine them with multiple attribute group decision making (MAGDM) and to apply the proposed MAGDM model to supplier selection in a 2-tuple linguistic environment.

Design/methodology/approach

The supplier selection problem can be regarded as a typical MAGDM problem in which the decision information must be aggregated. In this paper, the authors investigate MAGDM problems with 2-tuple linguistic information based on the traditional MM operator. The MM operator is a well-known mean-type aggregation operator with particular advantages for aggregating multi-dimensional arguments; its prominent characteristic is that it can capture the whole interrelationship among the multi-input arguments. Motivated by this idea, the authors develop the 2-tuple linguistic Muirhead mean (2TLMM) operator and the 2-tuple linguistic dual Muirhead mean (2TLDMM) operator for aggregating 2-tuple linguistic information. Some desirable properties and special cases are discussed in detail. On this basis, two approaches for dealing with MAGDM problems in a 2-tuple linguistic information environment are developed. Finally, a numerical example concerning the supplier selection problem is provided to illustrate the effectiveness and feasibility of the proposed methods.
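
For orientation, here is a minimal Python sketch of the classical Muirhead mean applied to the numeric indices of 2-tuple linguistic values; the label set, assessments and parameter vector are illustrative assumptions, and the paper's full 2TLMM and 2TLDMM operators and their properties are not reproduced.

from itertools import permutations
from math import factorial, prod

LABELS = ["very poor", "poor", "fair", "good", "very good"]   # s_0 .. s_4

def muirhead_mean(values, p):
    # MM^P(a) = ((1/n!) * sum over permutations sigma of prod_j a_sigma(j)^p_j)^(1 / sum(p))
    n = len(values)
    total = sum(prod(values[idx] ** pj for idx, pj in zip(perm, p))
                for perm in permutations(range(n)))
    return (total / factorial(n)) ** (1.0 / sum(p))

def to_two_tuple(beta):
    i = int(round(beta))
    return LABELS[i], round(beta - i, 3)

# Three criteria assessments (label indices 0..4) with parameter vector P = (1, 2, 3).
beta = muirhead_mean([3.0, 2.0, 4.0], [1, 2, 3])
print(to_two_tuple(beta))   # roughly ('good', -0.06) for these inputs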

Findings

The results show that the proposed methods can solve MAGDM problems in the context of 2-tuple linguistic information in which interactions exist among the attributes. Several 2-tuple aggregation operators based on the MM have been developed. A case study of supplier selection is provided to illustrate the effectiveness and feasibility of the proposed methods; the results show that the methods are useful for aggregating linguistic decision information in which the attributes are not independent, so as to select the most suitable supplier.

Practical implications

The proposed methods can solve the 2-tuple linguistic MAGDM problem in which interactions exist among the attributes. They can therefore be applied to supplier selection problems and other similar management decision problems.

Originality/value

The paper develops 2-tuple aggregation operators based on the MM and further presents two methods based on the proposed operators for solving MAGDM problems. The approach is useful for dealing with decision-making problems with interacting attributes and is suitable for a variety of management decision-making applications.

Details

Kybernetes, vol. 45 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 21 August 2023

Yasmin Richards, Mark McClish and David Keatley

Abstract

Purpose

Understanding when an individual is being deceptive is an important part of police and criminal investigations. While investigators have developed multiple methods, the research literature has yet to fully explore some of the newer applied techniques. This study aims to investigate statement analysis, a recent approach in forensic linguistic analysis that has been applied to criminal investigations.

Design/methodology/approach

Real-world statements of individuals exposed as deceptive or truthful were used in the analyses. A behaviour sequence analysis approach was used to provide a timeline analysis of the individuals’ statements.

Findings

Results indicate that sequential patterns differ between deceptive and truthful statements. For example, deceptive statements were more likely to include vague words and temporal lacunas intended to convince investigators that the suspect was not present when the crime occurred. The sample in this research did not rely on a single deceptive indicator, instead frequently changing the order of deceptive indicators. Gaps in deception were also noted, and repetition was common in both the deceptive and truthful statements. While gaps are predicted to occur in truthful statements to reflect an absence of deception, gaps occurring in the deceptive statements are likely due to cognitive load.

Originality/value

The current research provides more support for using statement analysis in real-world criminal cases.

Details

Journal of Criminal Psychology, vol. 13 no. 4
Type: Research Article
ISSN: 2009-3829

1 – 10 of over 17,000