Search results

1 – 10 of 327
Article
Publication date: 25 January 2023

Ashutosh Kumar and Aakanksha Sharaff

Abstract

Purpose

The purpose of this study was to design a multitask learning model so that biomedical entities can be extracted without having any ambiguity from biomedical texts.

Design/methodology/approach

In the proposed automated bio entity extraction (ABEE) model, a multitask learning model has been introduced as a combination of single-task learning models. The model used Bidirectional Encoder Representations from Transformers (BERT) to train each single-task learning model, and then combined the models' outputs to identify the variety of entities in biomedical text.
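As a rough illustration of the combination step, the sketch below merges the outputs of several single-task taggers into one entity list; the dictionary-based taggers are hypothetical stand-ins for the trained BERT models, and all vocabularies are invented.

```python
# Toy single-task taggers standing in for the BERT-based models;
# each handles one entity type, as in the ABEE design.

def tag_genes(text):
    vocab = {"BRCA1", "TP53"}
    return [(w.strip(".,"), "GENE") for w in text.split() if w.strip(".,") in vocab]

def tag_chemicals(text):
    vocab = {"aspirin", "lithium"}
    return [(w.strip(".,"), "CHEMICAL") for w in text.split()
            if w.strip(".,").lower() in vocab]

def tag_diseases(text):
    vocab = {"melanoma", "glioma"}
    return [(w.strip(".,"), "DISEASE") for w in text.split()
            if w.strip(".,").lower() in vocab]

def combine(text, taggers):
    """Merge single-task outputs into one list; earlier taggers win conflicts."""
    seen, merged = set(), []
    for tagger in taggers:
        for token, label in tagger(text):
            if token.lower() not in seen:
                seen.add(token.lower())
                merged.append((token, label))
    return merged

entities = combine("BRCA1 mutations and aspirin use in melanoma.",
                   [tag_genes, tag_chemicals, tag_diseases])
```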

Findings

The proposed ABEE model targeted unique gene/protein, chemical and disease entities in biomedical text. The findings are particularly important for biomedical research such as drug discovery and clinical trials. This research not only reduces researchers' effort but also lowers the cost of new drug discoveries and new treatments.

Research limitations/implications

As such, there are no known limitations of the model, but the research team plans to test the model with gigabytes of data and to establish a knowledge graph so that researchers can easily estimate entities of similar groups.

Practical implications

As far as practical implications are concerned, the ABEE model will be helpful in various natural language processing tasks. In information extraction (IE), it plays an important role in biomedical named entity recognition and biomedical relation extraction, as well as in information retrieval tasks such as literature-based knowledge discovery.

Social implications

During the COVID-19 pandemic, demand for this type of work increased because of the rise in clinical trials at that time. Had this type of research been introduced earlier, it would have reduced the time and effort required for new drug discoveries in this area.

Originality/value

In this work, a novel multitask learning model is proposed that is capable of extracting biomedical entities from biomedical text without ambiguity. The proposed model achieved state-of-the-art performance in terms of precision, recall and F1 score.

Details

Data Technologies and Applications, vol. 57 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 21 December 2020

Sudha Cheerkoot-Jalim and Kavi Kumar Khedo

Abstract

Purpose

This work shows the results of a systematic literature review on biomedical text mining. The purpose of this study is to identify the different text mining approaches used in different application areas of the biomedical domain, the common tools used and the challenges of biomedical text mining as compared to generic text mining algorithms. This study will be of value to biomedical researchers by allowing them to correlate text mining approaches to specific biomedical application areas. Implications for future research are also discussed.

Design/methodology/approach

The review was conducted following the principles of the Kitchenham method. A number of research questions were first formulated, followed by the definition of the search strategy. The papers were then selected based on a list of assessment criteria. Each paper was analyzed, and information relevant to the research questions was extracted.

Findings

It was found that researchers have mostly harnessed data sources such as electronic health records, biomedical literature, social media and health-related forums. The most common text mining technique was natural language processing using tools such as MetaMap and Unstructured Information Management Architecture, alongside the use of medical terminologies such as Unified Medical Language System. The main application area was the detection of adverse drug events. Challenges identified included the need to deal with huge amounts of text, the heterogeneity of the different data sources, the duality of meaning of words in biomedical text and the amount of noise introduced mainly from social media and health-related forums.

Originality/value

To the best of the authors’ knowledge, other reviews in this area have focused on either specific techniques, specific application areas or specific data sources. The results of this review will help researchers to correlate most relevant and recent advances in text mining approaches to specific biomedical application areas by providing an up-to-date and holistic view of work done in this research area. The use of emerging text mining techniques has great potential to spur the development of innovative applications, thus considerably impacting on the advancement of biomedical research.

Details

Journal of Knowledge Management, vol. 25 no. 3
Type: Research Article
ISSN: 1367-3270

Article
Publication date: 29 April 2020

Yongjun Zhu, Woojin Jung, Fei Wang and Chao Che

Abstract

Purpose

Drug repurposing involves the identification of new applications for existing drugs. Owing to the enormous rise in the costs of pharmaceutical R&D, several pharmaceutical companies are leveraging repurposing strategies. Parkinson's disease is the second most common neurodegenerative disorder worldwide, affecting approximately 1–2 percent of the human population older than 65 years. This study proposes a literature-based drug repurposing strategy in Parkinson's disease.

Design/methodology/approach

The literature-based drug repurposing strategy proposed herein combined natural language processing, network science and machine learning methods for analyzing unstructured text data and producing actionable knowledge for drug repurposing. The approach comprised multiple computational components, including the extraction of biomedical entities and their relationships, knowledge graph construction, knowledge representation learning and machine learning-based prediction.
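As a loose illustration of the prediction step, the toy sketch below scores drugs against a disease by overlap in the extracted triples; the entity names and relations are invented, and a real system would use learned knowledge-graph embeddings rather than this overlap count.

```python
# Toy knowledge graph of (head, relation, tail) triples, as would be
# extracted from the literature; names here are illustrative only.
triples = [
    ("levodopa", "targets", "DRD2"),
    ("levodopa", "targets", "GENE9"),
    ("pramipexole", "targets", "DRD2"),
    ("drugX", "targets", "GENE7"),
    ("parkinson", "associated_with", "DRD2"),
    ("parkinson", "associated_with", "GENE9"),
]

def neighbors(entity, relation):
    return {t for h, r, t in triples if h == entity and r == relation}

def score(drug, disease):
    """Count shared genes between a drug's targets and a disease's genes."""
    return len(neighbors(drug, "targets") & neighbors(disease, "associated_with"))

drugs = sorted({h for h, r, _ in triples if r == "targets"})
ranked = sorted(drugs, key=lambda d: score(d, "parkinson"), reverse=True)
```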

Findings

The proposed strategy was used to mine information pertaining to the mechanisms of disease treatment from known treatment relationships and predict drugs for repurposing against Parkinson's disease. The F1 score of the best-performing method was 0.97, indicating the effectiveness of the proposed approach. The study also presents experimental results obtained by combining the different components of the strategy.

Originality/value

The drug repurposing strategy proposed herein for Parkinson's disease is distinct from those existing in the literature in that the drug repurposing pipeline includes components of natural language processing, knowledge representation and machine learning for analyzing the scientific literature. The results of the study provide important and valuable information to researchers studying different aspects of Parkinson's disease.

Article
Publication date: 22 August 2022

Tatsawan Timakum, Min Song and Giyeong Kim

Abstract

Purpose

This study aimed to examine the mental health information entities and associations between the biomedical, psychological and social domains of bipolar disorder (BD) by analyzing social media data and scientific literature.

Design/methodology/approach

Reddit posts and full-text papers from PubMed Central (PMC) were collected. Text analysis was used to create a psychological dictionary, and text mining tools were applied to extract BD entities and their relationships in the datasets using a dictionary- and rule-based approach. Lastly, social network analysis and visualization were employed to view the associations.
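A minimal sketch of the dictionary-based co-occurrence counting behind such an association network might look as follows; the dictionary entries, categories and posts are invented for illustration.

```python
from itertools import combinations
from collections import Counter

# Toy dictionary mapping terms to categories (illustrative entries only).
dictionary = {"lithium": "biomedical", "afraid": "affective",
              "depressed": "affective", "suicidal": "psychological"}

def extract(post):
    """Return the dictionary terms mentioned in a post, sorted for stable pairing."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return sorted(words & dictionary.keys())

posts = ["Afraid to start Lithium again.", "Feeling depressed and afraid."]
edges = Counter()
for post in posts:
    for a, b in combinations(extract(post), 2):
        edges[(a, b)] += 1          # co-occurrence edge for the network
```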

Findings

Mental health information on the drug side effects entity was detected frequently in both datasets. In the affective category, the most frequent entities were “depressed” and “severe” in the social media and PMC data, respectively. The social and personal concerns entities that related to friends, family, self-attitude and economy were found repeatedly in the Reddit data. The relationships between the biomedical and psychological processes, “afraid” and “Lithium” and “schizophrenia” and “suicidal,” were identified often in the social media and PMC data, respectively.

Originality/value

Mental health information has been increasingly sought-after, and BD is a mental illness with complicated factors in the clinical picture. This paper has made an original contribution to comprehending the biological, psychological and social factors of BD. Importantly, these results have highlighted the benefit of mental health informatics that can be analyzed in the laboratory and social media domains.

Details

Aslib Journal of Information Management, vol. 75 no. 3
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 3 February 2023

Huyen Nguyen, Haihua Chen, Jiangping Chen, Kate Kargozari and Junhua Ding

Abstract

Purpose

This study aims to evaluate a method of building a biomedical knowledge graph (KG).

Design/methodology/approach

This research first constructs a COVID-19 KG on the COVID-19 Open Research Data Set, covering information over six categories (i.e. disease, drug, gene, species, therapy and symptom). The construction used open-source tools to extract entities, relations and triples. Then, the COVID-19 KG is evaluated on three data-quality dimensions: correctness, relatedness and comprehensiveness, using a semiautomatic approach. Finally, this study assesses the application of the KG by building a question answering (Q&A) system. Five queries regarding COVID-19 genomes, symptoms, transmissions and therapeutics were submitted to the system and the results were analyzed.
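In the same spirit, a Q&A system over such a KG can be reduced to lookups on (subject, relation, object) triples; the triples below are invented examples, not drawn from the actual data set.

```python
# Toy COVID-19-style triples; contents are illustrative only.
kg = [
    ("COVID-19", "has_symptom", "fever"),
    ("COVID-19", "has_symptom", "cough"),
    ("COVID-19", "treated_with", "remdesivir"),
]

def answer(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return [o for s, r, o in kg if s == subject and r == relation]

symptoms = answer("COVID-19", "has_symptom")
```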

Findings

With current extraction tools, the quality of the KG is moderate and difficult to improve unless more effort is made to improve the tools for entity extraction, relation extraction and others. This study finds that comprehensiveness and relatedness positively correlate with data size. Furthermore, the results indicate that the Q&A systems built on larger-scale KGs perform better than those built on smaller ones for most queries, demonstrating the importance of relatedness and comprehensiveness in ensuring the usefulness of the KG.

Originality/value

The KG construction process, data-quality-based and application-based evaluations discussed in this paper provide valuable references for KG researchers and practitioners to build high-quality domain-specific knowledge discovery systems.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 26 August 2022

Satanu Ghosh and Kun Lu

Abstract

Purpose

The purpose of this paper is to present a preliminary work on extracting band gap information of materials from academic papers. With increasing demand for renewable energy, band gap information will help material scientists design and implement novel photovoltaic (PV) cells.

Design/methodology/approach

The authors collected 1.44 million titles and abstracts of scholarly articles related to materials science, and then filtered the collection to 11,939 articles that potentially contain relevant information about materials and their band gap values. ChemDataExtractor was extended to extract information about PV materials and their band gap information. Evaluation was performed on randomly sampled information records of 415 papers.
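A much-simplified stand-in for such an extraction pass is a regular expression over sentences; the pattern below covers only one phrasing and illustrates the idea, not ChemDataExtractor's grammar-based approach.

```python
import re

# Match sentences of the form "<Material> has a band gap of <value> eV".
PATTERN = re.compile(
    r"(?P<material>[A-Z][A-Za-z0-9]*)\s+has\s+a\s+band\s+gap\s+of\s+"
    r"(?P<value>\d+(?:\.\d+)?)\s*eV")

def extract_band_gaps(text):
    """Return (material, band gap in eV) pairs found in the text."""
    return [(m["material"], float(m["value"])) for m in PATTERN.finditer(text)]

records = extract_band_gaps("TiO2 has a band gap of 3.2 eV, while "
                            "GaAs has a band gap of 1.42 eV.")
```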

Findings

The findings of this study show that the current system correctly extracts information for 51.32% of articles, with partially correct extraction for 36.62% of articles and incorrect extraction for 12.04%. The authors have also identified errors belonging to three main categories, pertaining to chemical entity identification, band gap information and interdependency resolution. Future work will focus on addressing these errors to improve the performance of the system.

Originality/value

The authors did not find any literature to date on band gap information extraction from academic text using automated methods. This work is unique and original. Band gap information is of importance to materials scientists in applications such as solar cells, light emitting diodes and laser diodes.

Details

Aslib Journal of Information Management, vol. 75 no. 3
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 20 November 2009

Martin Hofman‐Apitius, Erfan Younesi and Vinod Kasam

Abstract

Purpose

The purpose of this paper is to demonstrate how the information extracted from scientific text can be directly used in support of life science research projects. In modern digital‐based research and academic libraries, librarians should be able to support data discovery and the organization of digital entities in order to foster research projects effectively; thus the paper suggests that text mining and knowledge discovery tools could be of great assistance to librarians. Such tools enable librarians to cope with the increasing complexity in both the number and the contents of scientific publications, especially in emerging interdisciplinary fields of science. This paper seeks to present an example of how evidence extracted from scientific literature can be directly integrated into in silico disease models in support of drug discovery projects.

Design/methodology/approach

The application of text‐mining as well as knowledge discovery tools is explained in the form of a knowledge‐based workflow for drug target candidate identification. Moreover, an in silico experimentation framework is proposed for the enhancement of efficiency and productivity in the early steps of the drug discovery workflow.

Findings

The in silico experimentation workflow has been successfully applied to searching for hit and lead compounds in the World‐wide In Silico Docking On Malaria (WISDOM) project and to finding novel inhibitor candidates.

Practical implications

Direct extraction of biological information from text will ease the task of librarians in managing digital objects and supporting research projects. It is expected that textual data will play an increasingly important role in evidence‐based approaches taken by biomedical and translational researchers.

Originality/value

The proposed approach provides a practical example for the direct integration of text‐ and knowledge‐based data into life science research projects, with the emphasis on their application by academic and research libraries in support of scientific projects.

Details

Library Hi Tech, vol. 27 no. 4
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 13 January 2012

Carmen Galvez and Félix de Moya‐Anegón

Abstract

Purpose

Gene term variation is a shortcoming in text‐mining applications based on biomedical literature‐based knowledge discovery. The purpose of this paper is to propose a technique for normalizing gene names in biomedical literature.

Design/methodology/approach

Under this proposal, the normalized forms can be characterized as a unique gene symbol, defined as the official symbol or normalized name. The unification method involves five stages: collection of the gene term, using the resources provided by the Entrez Gene database; encoding of gene‐naming terms in a table or binary matrix; design of a parametrized finite‐state graph (P‐FSG); automatic generation of a dictionary; and matching based on dictionary look‐up to transform the gene mentions into the corresponding unified form.
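The final look-up stage can be pictured as a plain dictionary mapping name variants to the official symbol; the variant table below is a toy stand-in for the dictionary generated from Entrez Gene.

```python
# Toy variant-to-official-symbol table (illustrative entries only).
VARIANTS = {
    "tumor protein p53": "TP53",
    "p53": "TP53",
    "tp53": "TP53",
    "breast cancer 1": "BRCA1",
    "brca1": "BRCA1",
}

def normalize(mention):
    """Map a gene mention to its official symbol, or None if not found."""
    return VARIANTS.get(mention.strip().lower())

symbol = normalize("Tumor protein p53")
```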

Findings

The findings show that the approach yields a high percentage of recall. Precision is only moderately high, basically due to ambiguity problems between gene‐naming terms and words and abbreviations in general English.

Research limitations/implications

The major limitation of this study is that biomedical abstracts were analyzed instead of full‐text documents. The number of under‐normalization and over‐normalization errors is reduced considerably by limiting the realm of application to biomedical abstracts in a well‐defined domain.

Practical implications

The system can be used for practical tasks in biomedical literature mining. Normalized gene terms can be used as input to literature‐based gene clustering algorithms, for identifying hidden gene‐to‐disease, gene‐to‐gene and gene‐to‐literature relationships.

Originality/value

Few systems for gene term variation handling have been developed to date. The technique described performs gene name normalization by dictionary look‐up.

Details

Journal of Documentation, vol. 68 no. 1
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 22 October 2021

Na Pang, Li Qian, Weimin Lyu and Jin-Dong Yang

Abstract

Purpose

In computational chemistry, the chemical bond energy (pKa) is essential, but most pKa-related data are submerged in scientific papers, and only a small portion has been extracted manually by domain experts. This loss of scientific data does not contribute to in-depth and innovative scientific data analysis. To address this problem, this study aims to utilize natural language processing methods to extract pKa-related scientific data from chemical papers.

Design/methodology/approach

Building on a previous Bert-CRF model that combined dictionaries and rules to resolve the problem of a large number of unknown words in professional vocabulary, in this paper the authors propose an end-to-end Bert-CRF model whose input incorporates domain wordpiece tokens constructed with text mining methods. The authors use standard high-frequency string extraction techniques to construct domain wordpiece tokens for specific domains, and in the subsequent deep learning work, domain features are added to the input.
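The high-frequency string extraction step might be sketched as follows; the corpus, substring length and threshold are invented, and the real system's rules are more elaborate.

```python
from collections import Counter

def frequent_substrings(corpus, length=4, min_count=3):
    """Collect fixed-length substrings that recur across the corpus,
    as candidate domain wordpiece tokens."""
    counts = Counter()
    for word in corpus:
        for i in range(len(word) - length + 1):
            counts[word[i:i + length]] += 1
    return {s for s, c in counts.items() if c >= min_count}

corpus = ["acidity", "acidic", "diacid", "basic"]
tokens = frequent_substrings(corpus)
```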

Findings

The experiments show that the end-to-end Bert-CRF model achieves relatively good results and can be easily transferred to other domains. Because automatic high-frequency wordpiece token extraction techniques are used to construct the domain wordpiece tokenization rules, and domain features are then input to the Bert model, the requirements for experts are reduced.

Originality/value

By decomposing a large number of unknown words into domain feature-based wordpiece tokens, the authors resolve the problem of extensive professional vocabulary and achieve a relatively ideal extraction result compared with the baseline model. The end-to-end model explores low-cost migration of entity and relation extraction to professional fields, reducing the requirements for experts.

Details

Data Technologies and Applications, vol. 56 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 27 July 2022

Svetlozar Nestorov, Dinko Bačić, Nenad Jukić and Mary Malliaris

Abstract

Purpose

The purpose of this paper is to propose an extensible framework for extracting data set usage from research articles.

Design/methodology/approach

The framework uses a training set of manually labeled examples to identify word features surrounding data set usage references. Using the word features and general entity identifiers, candidate data sets are extracted and scored separately at the sentence and document levels. Finally, the extracted data set references can be verified by the authors using a web-based verification module.
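The sentence-level scoring can be pictured as counting usage-indicating feature words around a candidate name; the feature list and example sentence below are illustrative, not the paper's actual features.

```python
# Toy usage-indicating feature words (illustrative only).
FEATURE_WORDS = {"data", "dataset", "used", "obtained", "records", "sample"}

def sentence_score(sentence, candidate):
    """Score a candidate data set mention by nearby feature words."""
    words = [w.strip(".,;").lower() for w in sentence.split()]
    if candidate.lower() not in words:
        return 0
    return sum(1 for w in words if w in FEATURE_WORDS)

score = sentence_score(
    "We used the MIMIC dataset, obtained from the hospital records.", "MIMIC")
```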

Findings

This paper successfully addresses a significant gap in the entity extraction literature by focusing on data set extraction. In the process, this paper identified an entity-extraction scenario with specific characteristics that enable a multiphase approach, including a feasible author-verification step; defined the search space for word feature identification; defined scoring functions for sentences and documents; and designed a simple web-based author verification step. The framework was successfully tested on 178 articles authored by researchers from a large research organization.

Originality/value

Whereas previous approaches focused on completely automated large-scale entity recognition from text snippets, the proposed framework is designed for longer, high-quality text, such as a research publication. The framework includes a verification module that enables the authors of the research publications to validate the discovered entities. This module shares some similarities with general crowdsourcing approaches, but the target scenario increases the likelihood of meaningful author participation.
