Search results

1 – 10 of 301
Article
Publication date: 23 October 2007

Jung‐ran Park

The purpose of this paper is to present descriptive characteristics of the historical development of concept networks. The linguistic principles, mechanisms and motivations behind…

Abstract

Purpose

The purpose of this paper is to present descriptive characteristics of the historical development of concept networks. The linguistic principles, mechanisms and motivations behind the evolution of concept networks are discussed, and the implications of this historical development are considered in relation to knowledge representation and organization schemes.

Design/methodology/approach

Natural language data, including both speech and text, are analyzed by examining the discourse contexts in which a linguistic element such as a polysemous word or homonym occurs. Linguistic literature on the historical development of concept networks is reviewed and analyzed.

Findings

Semantic sense relations in concept networks can be captured in a systematic and regular manner. The mechanism and impetus behind the process of concept network development suggest that semantic senses in concept networks are closely intertwined with pragmatic contexts and discourse structure. The interrelation and permeability of the semantic senses of concept networks are captured on a continuum scale based on three linguistic parameters: concrete shared semantic sense; discourse and text structure; and contextualized pragmatic information.

Research limitations/implications

Research findings signify the critical need for linking discourse structure and contextualized pragmatic information to knowledge representation and organization schemes.

Originality/value

The idea of linguistic characteristics, principles, motivation and mechanisms underlying the evolution of concept networks provides theoretical ground for developing a model for integrating knowledge representation and organization schemes with discourse structure and contextualized pragmatic information.

Details

Journal of Documentation, vol. 63 no. 6
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 25 July 2023

Aida Khakimova, Oleg Zolotarev and Sanjay Kaushal

Effective communication is crucial in the medical field where different stakeholders use various terminologies to describe and classify healthcare concepts such as ICD, SNOMED CT…

Abstract

Purpose

Effective communication is crucial in the medical field, where different stakeholders use various terminologies, such as ICD, SNOMED CT, UMLS and MeSH, to describe and classify healthcare concepts, but the problem of polysemy can make natural language processing difficult. This study explores the contextual meanings of the term “pattern” in the biomedical literature, compares them to existing definitions, annotates a corpus for use in machine learning and proposes new definitions of terms such as “Syndrome, feature” and “pattern recognition.”

Design/methodology/approach

The Entrez API was used to retrieve articles from PubMed, assembling a corpus of 398 articles with a search query for the ambiguous term “pattern” in titles or abstracts. The Python NLTK library was used to extract the terms and their contexts, followed by an expert check. To understand the various meanings of the term, the contextual environment was analyzed by extracting the words surrounding each occurrence; the expert determined the appropriate context size for analysis to gain a more nuanced understanding of the different meanings of the term “pattern.”
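
As a rough illustration of this kind of workflow (not the authors' actual code), the sketch below retrieves PubMed abstracts for an ambiguous term via Biopython's Entrez interface and extracts a fixed window of context words with NLTK; the query string, window size and e-mail address are placeholders rather than the study's settings.

```python
# Illustrative sketch: retrieve PubMed abstracts for an ambiguous term and
# extract a fixed-size window of context words around each occurrence.
# Assumes Biopython and NLTK (with the 'punkt' tokenizer data) are installed;
# the query, window size and e-mail address are placeholders.
from Bio import Entrez
from nltk.tokenize import word_tokenize

Entrez.email = "researcher@example.org"  # required by NCBI; placeholder value

def fetch_abstracts(query="pattern[Title/Abstract]", retmax=400):
    """Search PubMed and return abstract texts for the matching PMIDs."""
    ids = Entrez.read(Entrez.esearch(db="pubmed", term=query, retmax=retmax))["IdList"]
    handle = Entrez.efetch(db="pubmed", id=",".join(ids), rettype="abstract", retmode="text")
    return handle.read().split("\n\n")  # rough split into abstract-sized chunks

def contexts(text, term="pattern", window=5):
    """Yield the `window` tokens on either side of each occurrence of `term`."""
    tokens = word_tokenize(text.lower())
    for i, tok in enumerate(tokens):
        if tok == term:
            yield tokens[max(0, i - window):i], tokens[i + 1:i + 1 + window]

for abstract in fetch_abstracts()[:5]:
    for left, right in contexts(abstract):
        print(left, "| pattern |", right)
```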

Findings

The study found that the categories of meanings of the term “pattern” are broader in biomedical publications than in common definitions, and new categories have been emerging from the term's use in the biomedical field. The study highlights the importance of annotated corpora in advancing natural language processing techniques and provides valuable insights into the nuances of biomedical language.

Originality/value

The study's findings demonstrate the importance of exploring contextual meanings and proposing new definitions of terms in the biomedical field to improve natural language processing techniques.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 7 April 2015

Andreas Vlachidis and Douglas Tudhope

The purpose of this paper is to present the role and contribution of natural language processing techniques, in particular negation detection and word sense disambiguation in the…

Abstract

Purpose

The purpose of this paper is to present the role and contribution of natural language processing techniques, in particular negation detection and word sense disambiguation, in the process of Semantic Annotation of Archaeological Grey Literature. Archaeological reports contain a great deal of information that conveys facts and findings in different ways. This kind of information is highly relevant to the research and analysis of archaeological evidence but at the same time can be a hindrance to the accurate indexing of documents with respect to positive assertions.

Design/methodology/approach

The paper presents a method for adapting the biomedicine-oriented negation algorithm NegEx to the context of archaeology and discusses the evaluation results of the modified negation detection module. A particular form of polysemy, which arises from the definition of ontology classes and concerns the semantics of small finds in archaeology, is addressed by a domain-specific word-sense disambiguation module.
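
To make the general idea concrete, here is a heavily simplified NegEx-style sketch that scans a fixed token window after a negation trigger for target terms; the trigger list, target terms and window size are invented for illustration and do not reproduce the paper's adapted module.

```python
# Simplified NegEx-style negation detection (illustrative only).
# Real NegEx uses curated trigger lists and pseudo-negation handling; the
# triggers and targets below are placeholders for an archaeology adaptation.
import re

NEGATION_TRIGGERS = ["no", "not", "without", "absence of", "no evidence of", "devoid of"]
TARGET_TERMS = ["pottery", "coin", "ditch", "hearth", "posthole"]
WINDOW = 6  # number of tokens scanned after a trigger

def negated_targets(sentence):
    """Return target terms that fall within WINDOW tokens after a negation trigger."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    hits = set()
    for i in range(len(tokens)):
        for trig in NEGATION_TRIGGERS:
            t = trig.split()
            if tokens[i:i + len(t)] == t:
                scope = tokens[i + len(t): i + len(t) + WINDOW]
                hits.update(term for term in TARGET_TERMS if term in scope)
    return hits

print(negated_targets("No evidence of pottery or coins was recovered from the ditch."))
# -> {'pottery'}  ('coins' is plural, so naive exact matching misses it)
```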

Findings

The performance of the negation detection module is compared against a “Gold Standard” consisting of 300 manually annotated pages of archaeological excavation and evaluation reports. The evaluation results are encouraging, delivering overall 89 per cent precision, 80 per cent recall and 83 per cent F-measure scores. The paper addresses limitations and future improvements of the current work and highlights the need for ontological modelling to accommodate negative assertions.
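
For readers unfamiliar with these metrics, the standard definitions of precision, recall and the balanced F-measure can be computed from raw counts as below; the counts are invented, and the paper's 83 per cent figure may reflect its own averaging over report sets rather than this exact formula.

```python
# Standard precision / recall / F1 from raw counts (illustrative numbers only;
# the paper's 89/80/83 per cent scores come from its own gold standard).
def prf(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(prf(tp=160, fp=20, fn=40))  # -> (0.888..., 0.8, 0.842...)
```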

Originality/value

The discussed NLP modules contribute to the aims of the OPTIMA pipeline, delivering an innovative application of such methods in the context of archaeological reports for the semantic annotation of archaeological grey literature with respect to the CIDOC-CRM ontology.

Article
Publication date: 1 February 2013

Miguel Delattre and Rodolphe Ocler

The notion of professionalism is polysemic in nature. This paper aims at analysing the development of this notion and identifying its components. The paper examines the process…

Abstract

Purpose

The notion of professionalism is polysemic in nature. This paper aims at analysing the development of this notion and identifying its components. The paper examines the process that leads to the development of professionalism based on the interaction between the actors and the organization. These social interactions, grounded in an organizational environment, reveal underlying tensions.

Design/methodology/approach

This paper draws on qualitative methodology using semi‐directive interviews and provides an example of application.

Findings

This paper focuses on how professionalism can be developed by generating acts, implementing actions and using resources.

Research limitations/implications

This paper presents one example in a specific field and should be put in perspective with other examples and fields.

Originality/value

This paper clarifies the notion of professionalism and identifies specific elements that have to be taken into account when developing it.

Details

Society and Business Review, vol. 8 no. 1
Type: Research Article
ISSN: 1746-5680

Article
Publication date: 22 February 2011

Lin‐Chih Chen

Term suggestion is a very useful information retrieval technique that tries to suggest relevant terms for users' queries, to help advertisers find more appropriate terms relevant…

Abstract

Purpose

Term suggestion is a useful information retrieval technique that suggests terms relevant to users’ queries, helping advertisers find more appropriate terms for their target market. This paper focuses on the problem of using several semantic analysis methods to implement a term suggestion system.

Design/methodology/approach

Three semantic analysis techniques are adopted – latent semantic indexing (LSI), probabilistic latent semantic indexing (PLSI), and a keyword relationship graph (KRG) – to implement a term suggestion system.
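
A minimal sketch of the LSI component alone is given below, assuming scikit-learn: terms are embedded by a truncated SVD of a TF-IDF term-document matrix and ranked by cosine similarity to a query term. The toy corpus and rank are illustrative; the PLSI and keyword relationship graph components of the paper's system are not shown.

```python
# Illustrative LSI-based term suggestion (not the paper's full LSI+PLSI+KRG system).
# Terms are embedded via truncated SVD of a TF-IDF term-document matrix and
# ranked by cosine similarity to the query term.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "cheap flight tickets and airline deals",
    "hotel booking and flight reservation online",
    "used car sales and auto insurance quotes",
    "car insurance comparison for new drivers",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)          # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)
term_vecs = svd.components_.T        # terms x latent dimensions

def suggest(term, k=3):
    """Return the k terms whose latent vectors are most similar to `term`."""
    vocab = vec.vocabulary_
    idx = vocab[term]
    sims = cosine_similarity(term_vecs[idx:idx + 1], term_vecs)[0]
    ranked = np.argsort(-sims)
    inv = {i: t for t, i in vocab.items()}
    return [inv[i] for i in ranked if i != idx][:k]

print(suggest("flight"))   # e.g. terms co-occurring in the travel documents
```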

Findings

This paper shows that using multiple semantic analysis techniques can give significant performance improvements.

Research limitations/implications

The suggested terms returned from the system may be out of date, since the system uses a batch processing mode to update the training parameters.

Originality/value

The paper shows that the benefit of the techniques is to overcome the problems of synonymy and polysemy in the information retrieval field by using a vector space model. Moreover, an intelligent stopping strategy is proposed to reduce the number of iterations required for probabilistic latent semantic indexing.

Details

Online Information Review, vol. 35 no. 1
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 13 September 2018

Yaghoub Norouzi and Hoda Homavandi

The purpose of this paper is to investigate image search and retrieval problems in selected search engines in relation to Persian writing style challenges.

Abstract

Purpose

The purpose of this paper is to investigate image search and retrieval problems in selected search engines in relation to Persian writing style challenges.

Design/methodology/approach

This study is an applied one, and an evaluative research method was used to answer the questions. The aim of the research is to explore the morphological and semantic problems of the Persian language in connection with image search and retrieval in three major and widespread search engines: Google, Yahoo and Bing. To collect the data, a researcher-designed checklist was used, and the data were then analyzed using descriptive and inferential statistics.

Findings

The results indicate that the Google, Yahoo and Bing search engines do not pay enough attention to morphological and semantic features of the Persian language in image search and retrieval. This research reveals that six groups of Persian language features are the major problems in this area: derived words, derived/compound words, Persian and Arabic plural words, use of the dotted “T”, use of spoken language, and polysemy. In addition, the results suggest that Google is the best of these search engines in terms of compatibility with Persian language features.

Originality/value

This study investigated some new aspects of the above-mentioned subject by combining morphological and semantic aspects of the Persian language with image search and retrieval. It is therefore an interdisciplinary study whose results can help both to offer solutions and to guide similar research in this subject area. The study also fills a gap in the research conducted so far on this area in the Farsi language, especially in image search and retrieval. Moreover, the findings can help to bridge the gap between users’ questions and what search engines (systems) retrieve. In addition, the methodology of this paper provides a framework for further research on image search and retrieval in databases and search engines.

Details

Online Information Review, vol. 42 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 23 October 2009

Ching‐Chieh Kiu and Chien‐Sing Lee

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM, K‐means) to resolve…

Abstract

Purpose

The purpose of this paper is to present an automated ontology mapping and merging algorithm, namely OntoDNA, which employs data mining techniques (FCA, SOM, K‐means) to resolve ontological heterogeneities among distributed data sources in organizational memory and subsequently generate a merged ontology to facilitate resource retrieval from distributed resources for organizational decision making.

Design/methodology/approach

The OntoDNA employs unsupervised data mining techniques (FCA, SOM, K‐means) to resolve ontological heterogeneities and integrate distributed data sources in organizational memory. Unsupervised methods are needed as an alternative because no prior knowledge is available for managing this knowledge. Given two ontologies to be merged as the input, the ontologies' conceptual pattern is discovered using FCA. String normalizations are then applied to transform their attributes in the formal context prior to lexical similarity mapping, and mapping rules are applied to reconcile the attributes. Subsequently, SOM and K‐means are applied for semantic similarity mapping based on the conceptual pattern discovered in the formal context, reducing the problem size of the SOM clusters as validated by the Davies‐Bouldin index. The mapping rules are then applied to discover semantic similarity between ontological concepts in the clusters, and the ontological concepts of the target ontology are updated to the source ontology based on the merging rules. The merged ontology is formed as a concept lattice.
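
As a toy illustration of the string-normalisation and lexical similarity mapping step only (the FCA, SOM and K-means stages are not reproduced here), one might normalise concept labels and pair those whose string similarity exceeds a threshold; the normalisation rules and the 0.8 threshold are assumptions, not OntoDNA's actual settings.

```python
# Toy illustration of the lexical similarity mapping step only:
# normalise concept labels and pair those above a similarity threshold.
# The normalisation rules and the 0.8 threshold are assumptions, not OntoDNA's.
from difflib import SequenceMatcher
import re

def normalise(label):
    """Lowercase, split camelCase and underscores, drop punctuation."""
    label = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", label)
    return re.sub(r"[^a-z0-9 ]", " ", label.replace("_", " ").lower()).strip()

def lexical_mappings(source_concepts, target_concepts, threshold=0.8):
    """Return (source, target, similarity) pairs above the threshold."""
    mappings = []
    for s in source_concepts:
        for t in target_concepts:
            sim = SequenceMatcher(None, normalise(s), normalise(t)).ratio()
            if sim >= threshold:
                mappings.append((s, t, round(sim, 2)))
    return mappings

print(lexical_mappings(["JournalArticle", "conference_paper"],
                       ["journal article", "Conference Paper", "TechnicalReport"]))
```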

Findings

In experimental comparisons between the PROMPT and OntoDNA ontology mapping and merging tools based on precision, recall and F-measure, the average mapping result for OntoDNA is 95.97 percent compared to PROMPT's 67.24 percent. In terms of recall, OntoDNA outperforms PROMPT on all paired ontologies except for one paired ontology. For the merging of one paired ontology, PROMPT fails to identify the mapping elements. OntoDNA significantly outperforms PROMPT due to its utilization of FCA to capture attributes and the inherent structural relationships among concepts. The better performance of OntoDNA is due to the following reasons. First, semantic problems such as synonymy and polysemy are resolved prior to contextual clustering. Second, the unsupervised data mining techniques (SOM and K-means) reduce the problem size. Third, string matching performs better than PROMPT's linguistic-similarity matching in addressing semantic heterogeneity, which also contributes to the OntoDNA results. String matching resolves concept names based on the similarity between concept names in each cluster for ontology mapping, whereas linguistic-similarity matching resolves concept names based on concept-representation structure and relations between concepts.

Originality/value

The OntoDNA automates ontology mapping and merging without the need of any prior knowledge to generate a merged ontology. String matching is shown to perform better than linguistic‐similarity matching in resolving concept names. The OntoDNA will be valuable for organizations interested in merging ontologies from distributed or different organizational memories. For example, an organization might want to merge their organization‐specific ontologies with community standard ontologies.

Details

VINE, vol. 39 no. 4
Type: Research Article
ISSN: 0305-5728

Article
Publication date: 19 January 2024

Julia Viezzer Baretta, Micheline Gaia Hoffmann, Luciana Militao and Josivania Silva Farias

The purpose of this study is to examine whether coproduction appears spontaneously in the literature on public sector innovation and governance, the citizens’ role in coproduction…

Abstract

Purpose

The purpose of this study is to examine whether coproduction appears spontaneously in the literature on public sector innovation and governance, the citizens’ role in coproduction and the implications of citizens’ participation in the governance of innovation networks.

Design/methodology/approach

The review complied with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The search was performed in the Ebsco, Scopus and WOS databases. The authors analyzed 47 papers published from 2017 to 2022. Thematic and content analysis were adopted, supported by MAXQDA.

Findings

The papers recognize the importance of citizens in public innovation. However, only 20% discuss coproduction, evidencing the predominance of governance concepts related to interorganizational collaboration, but not necessarily to citizen engagement. The authors also verified the polysemy of the concept of governance associated with public innovation, with the term “collaborative governance” predominating.

Research limitations/implications

The limited emphasis on “coproduction” may result from the search strategy, which deliberately did not include it as a descriptor, given the research purpose. This choice can be considered a limitation.

Practical implications

Considering collaborative governance as a governing arrangement in which public agencies directly engage nonstate stakeholders in a collective decision-making process that is formal, consensus-oriented and deliberative (Ansell and Gash, 2007), the forum in which the citizen is supposed to be engaged should be initiated by public agencies or institutions and formally organized, as suggested by Österberg and Qvist (2020) and Campomori and Casula (2022). These notions can be useful for public managers concerning their role and how forums should be structured to promote collaboration and the presence of the innovation assets needed to make the process fruitful (Crosby et al., 2017).

Originality/value

Despite the collaborative nature of public innovation, the need for adequate governance characteristics, and the importance of citizens in the innovative process, most studies generically address collaborative relationships, focusing on interorganizational collaboration, with little focus on specific actors such as citizens in the governance of public innovation. Thus, it is assumed that the literature that discusses public innovation and governance includes the discussion of coproduction. The originality and contribution of this study is to verify this assumption.

Details

International Journal of Innovation Science, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-2223

Article
Publication date: 5 June 2019

Markus Wohlfeil, Anthony Patterson and Stephen J. Gould

This paper aims to explain a celebrity’s deep resonance with consumers by unpacking the individual constituents of a celebrity’s polysemic appeal. While celebrities are…

Abstract

Purpose

This paper aims to explain a celebrity’s deep resonance with consumers by unpacking the individual constituents of a celebrity’s polysemic appeal. While celebrities are traditionally theorised as unidimensional semiotic receptacles of cultural meaning, the authors conceptualise them here instead as human beings/performers with a multi-constitutional, polysemic consumer appeal.

Design/methodology/approach

Supporting evidence is drawn from autoethnographic data collected over a total period of 25 months and structured through a hermeneutic analysis.

Findings

In rehumanising the celebrity, the study finds that each celebrity offers the individual consumer a unique and very personal parasocial appeal as the performer, as the private person behind the public performer, as the tangible manifestation of either through products and as a social link to other consumers. The stronger these constituents, individually or symbiotically, appeal to the consumer’s personal desires, the more s/he feels emotionally attached to this particular celebrity.

Research limitations/implications

Although using autoethnography means that the breadth of collected data is limited, the depth of insight this approach garners sufficiently unpacks the polysemic appeal of celebrities to consumers.

Practical implications

The findings encourage talent agents, publicists and marketing managers to reconsider underlying assumptions in their talent management and/or celebrity endorsement practices.

Originality/value

While prior research on celebrity appeal has tended to enshrine celebrities in a “dehumanised” structuralist semiosis, which erases the very idea of individualised consumer meanings, this paper reveals the multi-constitutional polysemy of any particular celebrity’s personal appeal as a performer and human being to any particular consumer.

Details

European Journal of Marketing, vol. 53 no. 10
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 2 February 2015

Jiunn-Liang Guo, Hei-Chia Wang and Ming-Way Lai

The purpose of this paper is to develop a novel feature selection approach for automatic text classification of large digital documents – e-books of online library system. The…

Abstract

Purpose

The purpose of this paper is to develop a novel feature selection approach for the automatic text classification of large digital documents, namely e-books in an online library system. The main idea is to automatically identify discourse features in order to improve the feature selection process, rather than focusing on the size of the corpus.

Design/methodology/approach

The proposed framework automatically identifies discourse segments within e-books, captures discourse subtopics that are cohesively expressed in those segments and treats these subtopics as informative and prominent features. The selected set of features is then used to train and perform the e-book classification task based on the support vector machine technique.
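
The general shape of such a pipeline might look like the sketch below, which uses NLTK's TextTiling segmenter as a stand-in for the paper's discourse segmentation, TF-IDF features in place of its subtopic features, and a linear SVM classifier; the mini-corpus is invented for demonstration.

```python
# Illustrative pipeline shape only: segment long texts into discourse-like
# chunks (NLTK TextTiling as a stand-in for the paper's segmenter), build
# TF-IDF features, and classify with a linear SVM. The tiny corpus is invented.
from nltk.tokenize import TextTilingTokenizer      # needs the nltk stopwords corpus
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def segment(book_text):
    """Split a long text into discourse segments (paragraph breaks required)."""
    return TextTilingTokenizer().tokenize(book_text)

# Invented mini-corpus: pretend each string is the segmented text of one e-book.
books = [
    "stars galaxies orbit telescope gravity planets",
    "recipes flour butter oven baking sugar dough",
    "comets asteroids orbit spacecraft astronomy",
    "kneading yeast bread pastry baking flavour",
]
labels = ["astronomy", "cooking", "astronomy", "cooking"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(books, labels)
print(clf.predict(["telescope observations of distant planets"]))  # expected: ['astronomy']
```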

Findings

The evaluation of the proposed framework shows that identifying discourse segments and capturing subtopic features leads to better performance, in comparison with two conventional feature selection techniques: TFIDF and mutual information. It also demonstrates that discourse features play important roles among textual features, especially for large documents such as e-books.

Research limitations/implications

Automatically extracted subtopic features cannot be entered directly into the feature selection (FS) process but require control of the threshold.

Practical implications

The proposed technique demonstrates the promising application of discourse analysis to enhance the classification of large digital documents such as e-books, as compared with conventional techniques.

Originality/value

A new FS technique is proposed that can inspect the narrative structure of large documents, which is new to the text classification domain. The other contribution is that it encourages the consideration of discourse information in future text analysis by providing further evidence through evaluation of the results. The proposed system can be integrated into other library management systems.

Details

Program, vol. 49 no. 1
Type: Research Article
ISSN: 0033-0337
