Search results

1 – 10 of over 6000
Article
Publication date: 15 August 2023

Yi-Hung Liu and Sheng-Fong Chen

Abstract

Purpose

Whether automatically generated summaries of health social media can assist users in appropriately managing their diseases and ensuring better communication with health professionals has become an important issue. This paper aims to develop a novel deep learning-based summarization approach for obtaining the most informative summaries from online patient reviews accurately and effectively.

Design/methodology/approach

This paper proposes a summary generation framework that integrates a domain-specific pre-trained embedding model with a deep neural extractive summarization approach, considering content features, text sentiment, review influence and readability features. Representative health-related summaries were identified, and user judgements were analysed.
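The feature-weighted scoring idea above can be sketched roughly as follows. The feature names mirror the abstract, but the values, weights and the `extract_summary` helper are invented for illustration; this is a minimal sketch, not the paper's deep neural model.

```python
# Hypothetical sketch of feature-weighted extractive summarization:
# each sentence gets a score from content, sentiment, influence and
# readability features; the top-scoring sentences form the summary.

def score_sentence(features, weights):
    """Weighted sum of per-sentence feature values."""
    return sum(weights[name] * value for name, value in features.items())

def extract_summary(sentences, weights, k=2):
    """Return the k highest-scoring sentences in original order."""
    ranked = sorted(sentences, key=lambda s: score_sentence(s["features"], weights), reverse=True)
    chosen = {id(s) for s in ranked[:k]}
    return [s["text"] for s in sentences if id(s) in chosen]

sentences = [
    {"text": "The new drug reduced my pain.",
     "features": {"content": 0.9, "sentiment": 0.7, "influence": 0.8, "readability": 0.6}},
    {"text": "I posted this from my phone.",
     "features": {"content": 0.1, "sentiment": 0.5, "influence": 0.2, "readability": 0.9}},
    {"text": "Side effects faded after a week.",
     "features": {"content": 0.8, "sentiment": 0.6, "influence": 0.7, "readability": 0.7}},
]
weights = {"content": 0.4, "sentiment": 0.2, "influence": 0.3, "readability": 0.1}
print(extract_summary(sentences, weights, k=2))
```

Keeping the selected sentences in their original order preserves the review's narrative flow in the summary.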

Findings

Experimental results on three real-world health forum datasets indicate that scoring sentences without incorporating all the adopted features degrades summarization performance. The proposed summarizer significantly outperformed the comparison baseline. User judgements collected through a questionnaire provide realistic and concrete evidence of the crucial features that strongly influence patient forum review summaries.

Originality/value

This study contributes to the health analytics and management literature by exploring users’ expressions and opinions through the health deep learning summarization model. The research also offers an innovative perspective on designing summarization weighting methods for user-created content on health topics.

Details

The Electronic Library, vol. 41 no. 5
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 June 2015

Trung Tran and Dang Tuan Nguyen

Abstract

Purpose

The purpose of this paper is to enhance the quality of the new reduced sentence in a sentence-generation-based summarizing method by establishing a consequence relationship between two Vietnamese sentences expressing actions, states or processes.

Design/methodology/approach

First, pairs of Vietnamese sentences are classified into types based on a presupposed consequence relationship: the verb indicating an action or state in the first sentence is treated as the consequence of the verb indicating an action, state or process in the second sentence. Then, the main predicates in the Discourse Representation Structure – a logical form which represents the semantics of a given pair of sentences – are analysed, and inner- and inter-sentential relationships are determined. The next step is to generate the syntactic structure of the new reduced sentence. Finally, this structure is combined with the built set of lexicons to complete the new meaning-summarizing Vietnamese sentence.
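The final generation step can be caricatured in miniature as follows. The example is in English rather than Vietnamese, and the verb classes and the one-entry connective lexicon are invented; the paper's DRS analysis is not reproduced.

```python
# Illustrative sketch: join a pair of sentences linked by a consequence
# relationship into one reduced sentence.

STATE_VERBS = {"is", "feels", "remains"}      # toy verb classification

def classify(verb):
    """Label the main verb as expressing a state or an action."""
    return "state" if verb in STATE_VERBS else "action"

def reduce_pair(first, second):
    """The first sentence is treated as the consequence of the second."""
    connective = "because"                    # single-entry connective lexicon
    return f"{first} {connective} {second[0].lower() + second[1:]}"

print(reduce_pair("She is tired", "She worked all night"))
```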

Findings

This method makes the new meaning-summarizing Vietnamese sentence satisfy two requirements: it summarizes the semantics of the given pair of Vietnamese sentences, and it sounds natural in everyday Vietnamese communication. In addition, the method can be extended and applied to summarizing more complex Vietnamese paragraphs, as well as paragraphs in other languages.

Research limitations/implications

As a first step, only the inter-sentential consequence relationship is considered, and it is applied to a limited set of types of pairs of Vietnamese sentences with a simple structure.

Originality/value

This study presents improvements to a sentence-generation-based summarization method to enhance the quality of new meaning-summarizing Vietnamese sentences. The method proves effective in summarizing the considered pairs of sentences.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 22 November 2011

Brent Wenerstrom and Mehmed Kantardzic

Abstract

Purpose

Search engine users are faced with long lists of search results, each entry of varying relevance. Often, the short text of a search result gives users false expectations about the linked web page. This leads users to skip relevant results, missing valuable insights, and to click on irrelevant web pages, wasting time. The purpose of this paper is to propose a new summary generation technique, ReClose, which combines query‐independent and query‐biased summary techniques to improve the accuracy of users' expectations.
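The combination idea can be sketched as pairing a page's leading sentence (query-independent context) with the sentence that best matches the query (query-biased evidence). The `combined_snippet` function and its term-overlap scoring below are assumptions for illustration, not the actual ReClose algorithm.

```python
# Toy combined snippet: query-independent lead + query-biased sentence.
import string

def tokens(s):
    """Lowercase, strip punctuation, split into a set of words."""
    table = str.maketrans("", "", string.punctuation)
    return set(s.lower().translate(table).split())

def combined_snippet(sentences, query):
    q = tokens(query)
    lead = sentences[0]                                        # query-independent part
    biased = max(sentences, key=lambda s: len(q & tokens(s)))  # query-biased part
    return lead if biased == lead else f"{lead} … {biased}"

page = [
    "Our lab studies renewable energy storage.",
    "Lithium-ion batteries degrade at high temperatures.",
    "We also host an annual summer school.",
]
print(combined_snippet(page, "batteries high temperatures"))
```

A real system would score with ranking features rather than raw term overlap, but the two-part structure is the point of the sketch.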

Design/methodology/approach

The authors tested the effectiveness of ReClose summaries against Google summaries by surveying 34 participants. Participants were randomly assigned to use one type of summary approach. Summary effectiveness was judged based on the accuracy of each user's expectations.

Findings

It was found that individuals using ReClose summaries showed a 10 per cent increase in expectation accuracy over individuals using Google summaries, suggesting better user satisfaction.

Practical implications

The survey demonstrates the effectiveness of using ReClose summaries to improve the accuracy of user expectations.

Originality/value

This paper presents a novel summary generation technique called ReClose, a new approach to summary evaluation and improvements upon previously proposed summary generation techniques.

Details

International Journal of Web Information Systems, vol. 7 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 3 November 2020

Jagroop Kaur and Jaswinder Singh

Abstract

Purpose

Normalization is an important step in all natural language processing applications that handle social media text. Text from social media poses problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly for the English language. People who do not speak English code-mix text in their native language and post it on social media using the Roman script. This kind of text further aggravates the normalization problem. This paper aims to discuss the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.

Design/methodology/approach

The system is divided into two phases – candidate generation and most probable sentence selection. The candidate generation task is treated as a machine translation task in which the Roman text is the source language and Gurmukhi text is the target language. A character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses a beam search to select the most probable sentence based on a hidden Markov model.
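The two-phase design can be caricatured with a toy example in English. The candidate table, bigram probabilities and `normalize` helper are invented stand-ins; the real phase 1 is a character-based Roman-to-Gurmukhi translation model.

```python
# Phase 1: per-token candidate generation; phase 2: beam search over a
# toy bigram language model to pick the most probable sequence.
import math

CANDIDATES = {                      # phase 1: candidates per noisy token
    "gud": ["good", "god"],
    "nite": ["night", "note"],
}
BIGRAM = {                          # toy bigram probabilities
    ("<s>", "good"): 0.6, ("<s>", "god"): 0.4,
    ("good", "night"): 0.7, ("good", "note"): 0.1,
    ("god", "night"): 0.2, ("god", "note"): 0.1,
}

def normalize(tokens, beam_width=2):
    beams = [(0.0, ["<s>"])]        # (log-probability, sequence so far)
    for tok in tokens:
        expanded = []
        for logp, seq in beams:
            for cand in CANDIDATES.get(tok, [tok]):
                p = BIGRAM.get((seq[-1], cand), 1e-6)
                expanded.append((logp + math.log(p), seq + [cand]))
        beams = sorted(expanded, reverse=True)[:beam_width]  # phase 2: prune
    best = max(beams)
    return " ".join(best[1][1:])    # drop the <s> start symbol

print(normalize(["gud", "nite"]))
```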

Findings

Character error rate (CER) and bilingual evaluation understudy (BLEU) scores are reported. The proposed system was compared with the Akhar software and the RB\_R2G system, which are also capable of transliterating Roman text to Gurmukhi, and it outperforms Akhar. The CER and BLEU scores for ill-formed text are 0.268121 and 0.6807939, respectively.
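CER, as conventionally defined, is the character-level edit distance between hypothesis and reference divided by the reference length. A minimal sketch on toy strings, not the paper's data:

```python
# Character error rate via classic dynamic-programming edit distance.

def edit_distance(a, b):
    """Levenshtein distance between strings a and b."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev = dp[0]
        dp[0] = i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (ca != cb))  # substitution (or match)
            prev = cur
    return dp[-1]

def cer(hypothesis, reference):
    return edit_distance(hypothesis, reference) / len(reference)

print(round(cer("gudnight", "goodnight"), 3))   # two edits over nine characters
```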

Research limitations/implications

It was observed that the system sometimes produces dialectal variants of a word, or the word with minor errors such as a missing diacritic. A spell checker could improve the output of the system by correcting these minor errors. Extensive experimentation is needed to optimize the language identifier, which will further improve the output. The language model also warrants further exploration. Inclusion of wider context, particularly from social media text, is an important area that deserves further investigation.

Practical implications

The practical implications of this study are: (1) the development of a parallel dataset containing Roman and Gurmukhi text; (2) the development of a dataset annotated with language tags; (3) the development of the normalization system, which is the first of its kind and proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi, and which can be extended to any pair of scripts; and (4) the use of the proposed system for better analysis of social media text. Theoretically, this study helps in better understanding text normalization in the social media context and opens the door for further research in multilingual social media text normalization.

Originality/value

Existing research focuses on normalizing monolingual text. This study contributes to the development of a normalization system for multilingual text.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 1 January 1984

H. FARRENY and H. PRADE

Abstract

This paper deals with a problem encountered in natural language generation which seems to have been largely ignored in the literature: that of generating non‐ambiguous (i.e. discriminating) designations of objects in a given context, from a knowledge base which associates the properties and relations of the objects present in the environment with their respective formal labels. A search algorithm of type A is proposed which always generates a discriminating designation when such a designation exists in terms of the available knowledge; for the evaluation, the algorithm uses a subjective length function which takes the “intelligibility” of the designation into account. This work is part of the SYROCO system, a dialogue interface for limited domains of discourse; sentence interpretation as well as sentence generation in SYROCO are briefly presented in the first part of this paper.
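The search idea can be sketched as a best-first search over property sets that stops at the first set matching only the target object, preferring "cheap" (more intelligible) descriptions. The objects, properties and per-property costs standing in for the subjective length function below are invented.

```python
# Best-first search for a discriminating designation of a target object.
import heapq
from itertools import count

OBJECTS = {
    "o1": {"cube", "red", "small"},
    "o2": {"cube", "blue", "small"},
    "o3": {"ball", "red", "large"},
}
COST = {"cube": 1, "ball": 1, "red": 1, "blue": 1, "small": 2, "large": 2}

def discriminating(target):
    props = sorted(OBJECTS[target])
    tie = count()                                 # unique tiebreaker for the heap
    heap = [(0, next(tie), frozenset())]          # (cost so far, tiebreak, chosen props)
    while heap:
        cost, _, chosen = heapq.heappop(heap)
        matching = [o for o, ps in OBJECTS.items() if chosen <= ps]
        if matching == [target]:
            return sorted(chosen)                 # unique, cheapest designation found
        for p in props:
            if p not in chosen:
                heapq.heappush(heap, (cost + COST[p], next(tie), chosen | {p}))
    return None                                   # no discriminating designation exists

print(discriminating("o1"))
```

Like an A-type search, cheaper (more intelligible) candidate descriptions are expanded first, so the first unique match returned is also the cheapest.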

Details

Kybernetes, vol. 13 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 March 1995

Geert Adriaens

Abstract

This paper describes ongoing developments in the LRE‐2 project SECC (Simplified English Grammar and Style Checker/Corrector). After a general description of the project, the approach to building the SECC writing tool is discussed. First, lingware issues are dealt with: resources used, technical implications of simplified grammar correction as machine translation, testing and evaluation issues. Next, we take a look at software issues, in particular the user interfaces. Finally, we discuss some open issues and future developments.

Details

Aslib Proceedings, vol. 47 no. 3
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 29 November 2023

Tarun Jaiswal, Manju Pandey and Priyanka Tripathi

Abstract

Purpose

The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Typical convolutional neural networks (CNNs) are unable to capture both local and global contextual information effectively and apply a uniform operation to all pixels in an image. To address this, we propose an innovative approach that integrates a dynamic convolution operation at the encoder stage, improving image encoding quality and disease detection. In addition, a decoder based on the gated recurrent unit (GRU) is used for language modeling, and an attention network is incorporated to enhance consistency. This novel combination allows for improved feature extraction, mimicking the expertise of radiologists by selectively focusing on important areas and producing coherent captions with valuable clinical information.

Design/methodology/approach

In this study, we present a new report generation approach that uses dynamic convolution applied to a ResNet-101 (DyCNN) as the encoder (Verelst and Tuytelaars, 2019) and a GRU as the decoder (Dey and Salem, 2017; Pan et al., 2020), along with an attention network (see Figure 1). This integration innovatively extends the capabilities of image encoding and sequential caption generation, representing a shift from conventional CNN architectures. With its ability to dynamically adapt receptive fields, the DyCNN excels at capturing features of varying scales within the CXR images. This dynamic adaptability significantly enhances the granularity of feature extraction, enabling precise representation of localized abnormalities and structural intricacies. By incorporating this flexibility into the encoding process, our model can distil meaningful and contextually rich features from the radiographic data. The attention mechanism enables the model to selectively focus on different regions of the image during caption generation, assigning different importance weights to different regions and thereby mimicking human perception. In parallel, the GRU-based decoder ensures a smooth, sequential generation of captions.
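The attention step alone can be sketched numerically: given encoder feature vectors for image regions and a decoder state, compute softmax attention weights and the weighted context vector. The toy region vectors and decoder state below are invented; the paper's DyCNN encoder and GRU decoder are not reproduced.

```python
# Dot-product attention over image-region features, in plain Python.
import math

def attention(regions, state):
    scores = [sum(r_i * s_i for r_i, s_i in zip(r, state)) for r in regions]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]                  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * r[d] for w, r in zip(weights, regions))  # weighted mix of regions
               for d in range(len(state))]
    return weights, context

regions = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three image-region feature vectors
state = [2.0, 0.0]                               # current decoder state
weights, context = attention(regions, state)
print([round(w, 3) for w in weights])
```

Regions whose features align with the decoder state receive larger weights, which is how the model "focuses" on informative areas at each decoding step.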

Findings

The findings of this study highlight the significant advancements achieved in chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Experiments conducted using the IU-Chest X-ray datasets showed that the proposed model outperformed other state-of-the-art approaches. The model achieved notable scores, including a BLEU_1 score of 0.591, a BLEU_2 score of 0.347, a BLEU_3 score of 0.277 and a BLEU_4 score of 0.155. These results highlight the efficiency and efficacy of the model in producing precise radiology reports, enhancing image interpretation and clinical decision-making.

Originality/value

This work is the first of its kind to employ a DyCNN as an encoder to extract features from CXR images. In addition, a GRU was used as the decoder for language modeling, and attention mechanisms were incorporated into the model architecture.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 1 May 2006

Shiyan Ou, Christopher S.G. Khoo and Dion H. Goh

Abstract

Purpose

The purpose of this research is to develop a method for automatic construction of multi‐document summaries of sets of news articles that might be retrieved by a web search engine in response to a user query.

Design/methodology/approach

Based on cross‐document discourse analysis, an event‐based framework is proposed for integrating and organizing information extracted from different news articles. It has a hierarchical structure in which the summarized information is presented at the top level and more detailed information is given at the lower levels. A tree‐view interface was implemented for displaying a multi‐document summary based on the framework. A preliminary user evaluation was performed by comparing the framework‐based summaries against sentence‐based summaries.
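The hierarchical presentation can be sketched as a small tree whose root carries the summarized information and whose lower levels carry progressively more detail, rendered as an indented outline like a tree view. The structure and sample content below are illustrative only.

```python
# A toy event-based summary tree and a tree-view-style renderer.

def make_node(text, children=None):
    return {"text": text, "children": children or []}

summary = make_node(
    "Storm hits coastal region",                  # top-level summarized information
    [
        make_node("Evacuations ordered in two towns", [
            make_node("Article A: 3,000 residents moved inland"),
        ]),
        make_node("Ports closed for 48 hours", [
            make_node("Article B: shipping resumed on Friday"),
        ]),
    ],
)

def render(node, depth=0):
    """Flatten the tree into an indented outline, one line per node."""
    lines = ["  " * depth + node["text"]]
    for child in node["children"]:
        lines.extend(render(child, depth + 1))
    return lines

print("\n".join(render(summary)))
```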

Findings

In a small evaluation, all the human subjects preferred the framework‐based summaries to the sentence‐based summaries. This indicates that the event‐based framework is an effective way to summarize a set of news articles reporting an event or a series of related events.

Research limitations/implications

The framework is limited to event‐based news articles and is not applicable to news critiques and other kinds of news articles. A summarization system based on the event‐based framework is being implemented.

Practical implications

Multi‐document summarization of news articles can adopt the proposed event‐based framework.

Originality/value

An event‐based framework for summarizing sets of news articles was developed and evaluated using a tree‐view interface for displaying such summaries.

Details

Aslib Proceedings, vol. 58 no. 3
Type: Research Article
ISSN: 0001-253X

Book part
Publication date: 9 May 2011

Antony J. Puddephatt

Abstract

G. H. Mead's social, developmental, and emergent conception of language and mind is a foundational assumption that is central to the interactionist tradition. However, the validity of this model has been challenged in recent years by theorists such as Albert Bergesen, who argues that recent advances in linguistics and cognitive psychology demonstrate that Mead's social theory of language learning and his theory of the social nature of mind are untenable. In light of these critiques, and drawing on Chomsky's debates with intellectuals such as Jean Piaget, John Searle, and Michael Tomasello, this chapter compares Chomsky's and Mead's theories of language and mind in terms of their assumptions about innateness and the nature and source of meaning. This comparison aims to address the major strengths and weaknesses in both models and shed light on how interactionists might frame these conceptual challenges in future theoretical and empirical research.

Details

Blue Ribbon Papers: Interactionism: The Emerging Landscape
Type: Book
ISBN: 978-0-85724-796-4

Book part
Publication date: 1 April 2011

Kristen L. McMaster, Kristen D. Ritchey and Erica Lembke

Abstract

Many students with learning disabilities (LD) experience significant difficulties in developing writing proficiency. Early identification and intervention can prevent long-term writing problems. Early identification and intervention require reliable and valid writing assessments that can be used to identify students at risk and monitor their progress in response to intervention. One promising approach to assessing students' performance and progress in writing is Curriculum-Based Measurement (CBM). In this chapter, we provide an overview of CBM. Next, we describe a theoretical framework for writing development, and discuss implications of this framework for developing writing assessments. We then describe current efforts to develop a seamless and flexible approach to monitoring student progress in writing in the early elementary grades, and highlight important directions for future research. We end with a discussion of how teachers might eventually use CBM to make data-based decisions to provide effective individualized interventions for students who experience writing difficulties.

Details

Assessment and Intervention
Type: Book
ISBN: 978-0-85724-829-9
