Search results

1 – 10 of 633
Article
Publication date: 14 August 2024

Hyogon Kim, Eunmi Lee and Donghee Yoo

This study aims to provide measurable information that evaluates a company’s ESG performance based on the conceptual connection between ESG, non-financial elements of a company…

Abstract

Purpose

This study aims to provide measurable information that evaluates a company’s ESG performance based on the conceptual connection between ESG, non-financial elements of a company and the UN Sustainable Development Goals (SDGs) for resolving global issues.

Design/methodology/approach

A novel data processing method based on BERT is presented and applied to analyze the changes in and characteristics of SDG-related ESG texts from companies’ disclosures over the past decade. Specifically, ESG-related sentences are extracted from 93,277 Form 10-K filings disclosed between 2010 and 2022, and the similarity between these extracted sentences and SDG statements is calculated with sentence transformers. A classifier is created by fine-tuning FinBERT, a financial domain-specific pre-trained language model, to classify the sentences into eight ESG classes.
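The similarity step described above can be sketched as cosine similarity between sentence embeddings. The vectors below are illustrative placeholders, not the output of the sentence-transformer model the study actually used, and the SDG labels are assumptions for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative placeholder embeddings; in the study these would come from
# a sentence-transformer model encoding 10-K sentences and SDG statements.
filing_sentence = np.array([0.9, 0.1, 0.3])
sdg_statements = {
    "SDG 7: affordable and clean energy": np.array([0.8, 0.2, 0.4]),
    "SDG 13: climate action": np.array([0.1, 0.9, 0.2]),
}

scores = {name: cosine_similarity(filing_sentence, vec)
          for name, vec in sdg_statements.items()}
best_match = max(scores, key=scores.get)
print(best_match)  # the SDG statement most similar to the filing sentence
```

In the full pipeline, each extracted 10-K sentence would be scored against every SDG statement this way before the FinBERT classifier assigns it an ESG class.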

Findings

The quantified results obtained from the classifier reveal several implications. First, the volume of SDG-related ESG sentences shows a slow and steady increase over the past decade. Second, large-cap companies make relatively more SDG-related ESG disclosures than small-cap companies. Third, significant events such as the COVID-19 pandemic strongly affect the disclosure content.

Originality/value

This study presents a novel approach to textual analysis using neural network-based language models such as BERT. The results of this study provide meaningful information and insights for investors in socially responsible investment and sustainable investment and suggest that corporations need a long-term plan regarding ESG disclosures.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 31 July 2024

Kyung-Shick Choi, Mohamed Chawki and Subhajit Basu

Exhibiting an unprecedented rate of advancement, technology’s progression over the past two decades has regrettably led to a disturbing increase in the distribution of child…

Abstract

Purpose

Exhibiting an unprecedented rate of advancement, technology’s progression over the past two decades has regrettably led to a disturbing increase in the distribution of child sexual abuse materials (CSAM) online. Compounded by the emergence of an underground cryptocurrency market, which serves as a primary distribution channel for these materials, the investigation and sanctioning of CSAM present a complex and unique set of challenges. The purpose of this study is to accurately diagnose the CSAM sentencing landscape and build a more comprehensive, evidence-based legal framework in penology.

Design/methodology/approach

The study collected and analyzed case details regarding CSAM sanctions in a database sourced from the US Department of Justice for 2020. Various factors were analyzed, such as the victim’s age, offender typology and previous convictions, together with an analysis of how these factors affect sentence length.

Findings

The study found that hierarchical agency-level interactions give insight into how resource allocation is prioritized, and it confirmed a close relationship between prior conviction history and sentence length, with the victim’s age inversely related to sentence length. Leveraging data-driven insights, the study paves the way for more targeted and effective sanctions, ultimately contributing to the broader goal of safeguarding children from online sexual exploitation.

Originality/value

The paper provides a critical analysis of the complex landscape surrounding CSAM distribution and judicial sentencing. By examining case details and leveraging data-driven insights, it offers valuable contributions to understanding the interplay between various factors such as victim age, offender typology and prior convictions on sentencing outcomes. This comprehensive approach not only sheds light on the dynamics of CSAM sanctions but also lays the groundwork for evidence-based legal frameworks in penology. Its originality lies in its nuanced examination of hierarchical agency interactions and its potential to inform more targeted interventions for safeguarding children from online exploitation.

Details

Journal of Aggression, Conflict and Peace Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1759-6599


Open Access
Article
Publication date: 17 April 2024

Elham Rostami and Fredrik Karlsson

This paper aims to investigate how congruent keywords are used in information security policies (ISPs) to pinpoint and guide clear actionable advice and suggest a metric for…

Abstract

Purpose

This paper aims to investigate how congruent keywords are used in information security policies (ISPs) to pinpoint and guide clear actionable advice and suggest a metric for measuring the quality of keyword use in ISPs.

Design/methodology/approach

A qualitative content analysis of 15 ISPs from public agencies in Sweden was conducted with the aid of Orange Data Mining Software. The authors extracted 890 sentences from these ISPs that included one or more of the analyzed keywords. These sentences were analyzed using the new metric – keyword loss of specificity – to assess to what extent the selected keywords were used for pinpointing and guiding actionable advice. To this end, the authors classified each extracted sentence as either actionable advice or other information, depending on the type of information conveyed.
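The abstract does not formally define the keyword loss of specificity metric; the sketch below is one plausible formalization, assuming the metric is the share of keyword-bearing sentences that convey information other than actionable advice. The function name and the sample sentences are ours, not the authors’.

```python
# Hypothetical formalization of "keyword loss of specificity": for one
# analyzed keyword, the fraction of sentences containing it that were
# classified as something other than actionable advice.
def keyword_loss_of_specificity(classified_sentences):
    """classified_sentences: list of (sentence, is_actionable) pairs,
    all of which contain the keyword under analysis."""
    if not classified_sentences:
        return 0.0
    other = sum(1 for _, is_actionable in classified_sentences
                if not is_actionable)
    return other / len(classified_sentences)

sample = [
    ("Users must lock their screens when leaving the desk.", True),
    ("Passwords are an important part of our security posture.", False),
    ("Password history is discussed in Section 3.", False),
]
print(keyword_loss_of_specificity(sample))  # 2 of 3 sentences are not actionable
```

Under this reading, the study’s finding that about two-thirds of keyword sentences carried other information corresponds to a loss of specificity near 0.67.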

Findings

The results show a significant keyword loss of specificity in relation to pieces of actionable advice in ISPs provided by Swedish public agencies. About two-thirds of the sentences in which the analyzed keywords were used focused on information other than actionable advice. Such dual use of keywords reduces the possibility of pinpointing and communicating clear, actionable advice.

Research limitations/implications

The suggested metric provides a means to assess the quality of how keywords are used in ISPs for different purposes. The results show that more research is needed on how keywords are used in ISPs.

Practical implications

The authors recommend that ISP designers exercise caution when using keywords in ISPs and keep their use of keywords consistent. ISP designers can use the suggested metric to assess the quality of actionable advice in their ISPs.

Originality/value

The keyword loss of specificity metric adds to the few quantitative metrics available to assess ISP quality. To the best of the authors’ knowledge, applying this metric is a first attempt to measure the quality of actionable advice in ISPs.

Details

Information & Computer Security, vol. 32 no. 4
Type: Research Article
ISSN: 2056-4961


Article
Publication date: 19 January 2024

Meng Zhu and Xiaolong Xu

Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID is to identify the main intent of a paragraph of text. The goal of SF is…

Abstract

Purpose

Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID identifies the main intent of a text passage, while SF extracts the information relevant to that intent from the input sentence. However, most existing methods use sentence-level intent recognition, which risks error propagation, and the relationship between intent recognition and SF is not explicitly modeled. To address this problem, this paper proposes a collaborative model of ID and SF for intelligent spoken language understanding, called ID-SF-Fusion.

Design/methodology/approach

ID-SF-Fusion uses Bidirectional Encoder Representations from Transformers (BERT) and Bidirectional Long Short-Term Memory (BiLSTM) to extract effective word embeddings and context vectors that capture whole-sentence information, respectively. A fusion layer provides intent–slot fusion information for the SF task, so that the relationship between the ID and SF tasks is explicitly modeled. This layer takes the ID result and the slot context vectors as input and produces fusion information that contains both. Meanwhile, to further reduce error propagation, word-level ID is used in the ID-SF-Fusion model. Finally, the ID and SF tasks are realized through joint optimization training.
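A minimal sketch of the fusion idea, under our assumption (not stated in the abstract) that the fusion layer concatenates each token’s slot context vector with that token’s word-level intent distribution before slot classification; the dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, hidden, n_intents = 5, 8, 3

# Stand-ins for BiLSTM slot context vectors and word-level intent logits;
# in the real model these come from the BERT + BiLSTM encoder.
slot_context = rng.normal(size=(seq_len, hidden))
intent_logits = rng.normal(size=(seq_len, n_intents))

# Fusion: concatenate each token's intent distribution with its slot
# context vector, giving the slot classifier access to the intent decision.
intent_probs = softmax(intent_logits)
fused = np.concatenate([slot_context, intent_probs], axis=-1)
print(fused.shape)  # (5, 11)
```

The fused representation per token then feeds the SF classifier, which is how intent information is injected into the slot-filling decision.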

Findings

We conducted experiments on two public datasets, Airline Travel Information Systems (ATIS) and Snips. The results show that ID-SF-Fusion achieves an intent accuracy of 98.0 per cent and a slot F1 score of 95.8 per cent on ATIS, and 98.6 per cent and 96.7 per cent, respectively, on Snips. These results are superior to slot-gated, SF-ID network, Stack-Prop and other models. In addition, ablation experiments were performed to further analyze and discuss the proposed model.

Originality/value

This paper uses word-level intent recognition and introduces intent information into the SF process, yielding significant improvements on both datasets.

Details

Data Technologies and Applications, vol. 58 no. 4
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 10 November 2023

Wagdi Rashad Ali Bin-Hady, Arif Ahmed Mohammed Hassan Al-Ahdal and Samia Khalifa Abdullah

English as a foreign language (EFL) students find it difficult to apply the theoretical knowledge they acquire about translation in the practical world. Therefore, this study…

Abstract

Purpose

English as a foreign language (EFL) students find it difficult to apply the theoretical knowledge they acquire about translation in the practical world. Therefore, this study explored whether training in pretranslation techniques (PTTs) (syntactic parsing), as suggested by Almanna (2018), could improve the translation proficiency of Yemeni EFL students. Moreover, the study assessed which of the PTTs the intervention helped to develop.

Design/methodology/approach

The study adopted a primarily experimental pre- and posttest research design, and the sample comprised an intake class of 16 fourth-year students in the Bachelor of Education (B.Ed) program at Hadhramout University. Six participants were also interviewed to gather the students’ perceptions of using PTTs.

Findings

Results showed that students’ performance in translation improved significantly (Sig. = 0.002). All six PTTs showed development, though subject, tense and aspect developed most significantly (Sig. = 0.034, 0.002 and 0.001, respectively). Finally, the study reported students’ positive perceptions of the importance of using PTTs before undertaking any translation task.

Originality/value

One of the recurrent errors noticeable in Yemeni EFL students’ production is their inability to transfer the grammatical elements of sentences from L1 (Arabic) into L2 (English) or vice versa. The researchers contend that although translation is more than the syntactic transmission of one language into another, analyzing the elements of sentences using syntactic and semantic parsing can help students produce acceptable texts in the target language. These claims were tested against the results of the present experiment.

Details

Journal of Applied Research in Higher Education, vol. 16 no. 4
Type: Research Article
ISSN: 2050-7003


Article
Publication date: 29 December 2023

B. Vasavi, P. Dileep and Ulligaddala Srinivasarao

Aspect-based sentiment analysis (ASA) is a task of sentiment analysis that requires predicting aspect sentiment polarity for a given sentence. Many traditional techniques use…

Abstract

Purpose

Aspect-based sentiment analysis (ASA) is a sentiment analysis task that requires predicting the sentiment polarity of each aspect in a given sentence. Many traditional techniques use graph-based mechanisms, which reduce prediction accuracy and introduce large amounts of noise. A further problem with graph-based mechanisms is that the sentiment of some context words changes depending on the aspect, so their polarity cannot be determined in isolation. ASA is challenging because a given sentence can express complicated feelings about multiple aspects.

Design/methodology/approach

This research proposes an optimized attention-based deep learning (DL) model known as optimized aspect and self-attention aware long short-term memory for target-based semantic analysis (OAS-LSTM-TSA). The proposed model goes through three phases: preprocessing, aspect extraction and classification. Aspect extraction is done using a double-layered convolutional neural network (DL-CNN). The optimized aspect and self-attention embedded LSTM (OAS-LSTM) is used to classify aspect sentiment into three classes: positive, neutral and negative.
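The self-attention component can be illustrated with a generic scaled dot-product attention layer placed over LSTM hidden states; this is a standard construction, not the paper’s exact layer, and the dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    """Scaled dot-product self-attention over a sequence of hidden states
    H (seq_len x d). Queries, keys and values all equal H, as in a basic
    self-attention layer on top of an LSTM encoder."""
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)        # pairwise token affinities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ H                   # attention-weighted context

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 6))              # 4 tokens, 6-dim LSTM states
context = self_attention(H)
print(context.shape)  # (4, 6)
```

In an aspect-sentiment model, such a layer lets each token weigh the hidden states of aspect-relevant tokens before the final polarity classification.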

Findings

The optimized aspect and self-attention embedded LSTM (OAS-LSTM) model is used to detect and classify the sentiment polarity of each aspect. The results of the proposed method reveal that it achieves a high accuracy of 95.3 per cent on the restaurant dataset and 96.7 per cent on the laptop dataset.

Originality/value

The novelty of the research work lies in the addition of two effective attention layers to the network model and in the reduction of the loss function and enhancement of accuracy using a recent, efficient optimization algorithm. The loss function in OAS-LSTM is minimized using the adaptive pelican optimization algorithm, thereby increasing the accuracy rate. The performance of the proposed method is validated on four real-world datasets, Rest14, Lap14, Rest15 and Rest16, across various performance metrics.

Details

Data Technologies and Applications, vol. 58 no. 3
Type: Research Article
ISSN: 2514-9288


Content available
Article
Publication date: 12 August 2024

Courtney Hammond, Ashleigh S. Thatcher and Dean Fido

British Prime Minister Rishi Sunak recently introduced a “whole life order” sentence in response to sexually motivated or sadistic homicide offences (Gov.uk, 2023). Effectively…

Abstract

Purpose

British Prime Minister Rishi Sunak recently introduced a “whole life order” sentence in response to sexually motivated or sadistic homicide offences (Gov.uk, 2023). Effectively, this condemns the recipient to the remainder of their life in incarceration and renders rehabilitative interventions redundant. The purpose of this paper is to explore the literature pertaining to public pedagogy, definitions and convictions, and rehabilitative interventions, all in relation to those considered to have committed sexually motivated or sadistic murders, with emphasis on the implications of such sentences.

Design/methodology/approach

Through this commentary, this paper explores the following points in line with existing literature: (a) public knowledge of the criminal justice system and those who have committed homicide offences, (b) the manner of defining and convicting sexually motivated and sadistic murders and (c) current access to rehabilitation intervention programmes.

Findings

This paper closes by recommending future research initiatives to deliver forensic-specific education for the general public as well as qualitative studies into the discourse around retribution to enable a conjunction between public concern and academic underpinning. Wider implications concerning public understandings, convictions, rehabilitations and politics are discussed.

Originality/value

To the best of the authors’ knowledge, this is the first paper to explore the practical and theoretical implications of imposing a whole life order on those charged with sadistic or sexually motivated murders.

Details

Safer Communities, vol. 23 no. 4
Type: Research Article
ISSN: 1757-8043


Article
Publication date: 4 January 2024

Zicheng Zhang

Advanced big data analysis and machine learning methods are concurrently used to unleash the value of the data generated by government hotlines and help devise intelligent…

Abstract

Purpose

Advanced big data analysis and machine learning methods are used in concert to unleash the value of the data generated by government hotlines and to help devise intelligent applications, including automated process management, standards construction and more accurate order dispatching, so as to build high-quality government service platforms as data-driven methods become more widespread.

Design/methodology/approach

In this study, informed by the record specifications of work-order texts generated by the government hotline, machine learning tools are implemented and compared to optimize the classification of dispatching tasks. Exploratory studies are performed on the hotline work-order text, including linguistic analysis of text feature processing, new word discovery, text clustering and text classification.

Findings

Applying more standardized writing specifications and combining textual and grammatical numerical features reduce the complexity of the work-order content. As a result, the LSTM model predicts order-dispatch success with an accuracy of 89.6 per cent.

Originality/value

The proposed method can help improve the current dispatching processes run by the government hotline, better guide staff to standardize the writing format of work orders, improve the accuracy of order dispatching and provide innovative support to the current mechanism.

Details

Data Technologies and Applications, vol. 58 no. 3
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 8 August 2024

Chih-Ming Chen and Xian-Xu Chen

This study aims to develop an associative text analyzer (ATA) to support users in quickly grasping and interpreting the content of large amounts of text through text association…

Abstract

Purpose

This study aims to develop an associative text analyzer (ATA) to support users in quickly grasping and interpreting the content of large amounts of text through text association recommendations, facilitating the identification of the contextual relationships between people, events, organizations and locations for digital humanities. Additionally, by providing text summaries, the tool allows users to move between distant and close readings, thereby enabling more efficient exploration of related texts.

Design/methodology/approach

To verify the tool’s effectiveness in supporting the exploration of historical texts, this study used a counterbalanced design to compare the digital humanities platform for Mr. Lo Chia-Lun’s Writings (DHP-LCLW) with and without the ATA for exploring different aspects of text. The study investigated whether there were significant differences in text-exploration effectiveness and in technology acceptance, and it used semi-structured in-depth interviews to understand the research participants’ viewpoints and experiences with the ATA.

Findings

The results of the experiment revealed that the effectiveness of text exploration using the DHP-LCLW with and without the ATA varied significantly depending on the topic of the text being explored. The DHP-LCLW with the ATA was found to be more suitable for exploring historical texts, while the DHP-LCLW without the ATA was more suitable for exploring educational texts. The DHP-LCLW with the ATA was also found to be significantly more useful in terms of perceived usefulness than the DHP-LCLW without the ATA, indicating that the research participants believed the ATA helped them grasp related texts and topics more efficiently during text exploration.

Practical implications

The study’s practical implications lie in the development of an ATA for digital humanities, offering a valuable tool for efficiently exploring historical texts. The ATA enhances users’ ability to grasp and interpret large volumes of text, facilitating contextual relationship identification. Its practical utility is evident in the improved effectiveness of text exploration, particularly for historical content, as indicated by users’ perceived usefulness.

Originality/value

This study proposes an ATA for digital humanities, enhancing text exploration by offering association recommendations and efficient linking between distant and close readings. The study contributes by providing a specialized tool and demonstrating its perceived usefulness in facilitating efficient exploration of related texts in digital humanities.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806


Article
Publication date: 5 July 2024

Nouhaila Bensalah, Habib Ayad, Abdellah Adib and Abdelhamid Ibn El Farouk

The paper aims to enhance Arabic machine translation (MT) by proposing novel approaches: (1) a dimensionality reduction technique for word embeddings tailored for Arabic text…

Abstract

Purpose

The paper aims to enhance Arabic machine translation (MT) by proposing novel approaches: (1) a dimensionality reduction technique for word embeddings tailored for Arabic text, optimizing efficiency while retaining semantic information; (2) a comprehensive comparison of meta-embedding techniques to improve translation quality; and (3) a method leveraging self-attention and Gated CNNs to capture token dependencies, including temporal and hierarchical features within sentences, and interactions between different embedding types. These approaches collectively aim to enhance translation quality by combining different embedding schemes and leveraging advanced modeling techniques.

Design/methodology/approach

Recent works on MT in general and Arabic MT in particular often pick one type of word embedding model. In this paper, we present a novel approach to enhance Arabic MT by addressing three key aspects. Firstly, we propose a new dimensionality reduction technique for word embeddings, specifically tailored for Arabic text. This technique optimizes the efficiency of embeddings while retaining their semantic information. Secondly, we conduct an extensive comparison of different meta-embedding techniques, exploring the combination of static and contextual embeddings. Through this analysis, we identify the most effective approach to improve translation quality. Lastly, we introduce a novel method that leverages self-attention and Gated convolutional neural networks (CNNs) to capture token dependencies, including temporal and hierarchical features within sentences, as well as interactions between different types of embeddings. Our experimental results demonstrate the effectiveness of our proposed approach in significantly enhancing Arabic MT performance. It outperforms baseline models with a BLEU score increase of 2 points and achieves superior results compared to state-of-the-art approaches, with an average improvement of 4.6 points across all evaluation metrics.
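The gated-CNN component can be illustrated with a gated linear unit (GLU), the standard gating used in gated convolutional networks; the pointwise (kernel size 1) filters and dimensions below are illustrative, not the paper’s configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu_pointwise(X, Wa, Wb):
    """Gated linear unit, as used in gated CNNs:
    output = (X @ Wa) * sigmoid(X @ Wb), where the sigmoid branch gates
    how much of each feature passes through.
    X: (seq_len, d_in); Wa, Wb: (d_in, d_out) pointwise filters."""
    return (X @ Wa) * sigmoid(X @ Wb)

rng = np.random.default_rng(2)
X = rng.normal(size=(7, 16))      # 7 token embeddings, 16-dim
Wa = rng.normal(size=(16, 8))     # linear branch
Wb = rng.normal(size=(16, 8))     # gating branch
Y = glu_pointwise(X, Wa, Wb)
print(Y.shape)  # (7, 8)
```

Because the sigmoid gate lies in (0, 1), each output feature is a damped copy of the linear branch, which is what lets such layers selectively pass token dependencies into the transformer stack.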

Findings

The proposed approaches significantly enhance Arabic MT performance. The dimensionality reduction technique improves the efficiency of word embeddings while preserving semantic information. Comprehensive comparison identifies effective meta-embedding techniques, with the contextualized dynamic meta-embeddings (CDME) model showcasing competitive results. Integration of Gated CNNs with the transformer model surpasses baseline performance, leveraging both architectures' strengths. Overall, these findings demonstrate substantial improvements in translation quality, with a BLEU score increase of 2 points and an average improvement of 4.6 points across all evaluation metrics, outperforming state-of-the-art approaches.

Originality/value

The paper’s originality lies in its departure from simply fine-tuning the transformer model for a specific task. Instead, it introduces modifications to the internal architecture of the transformer, integrating Gated CNNs to enhance translation performance. This departure from traditional fine-tuning approaches demonstrates a novel perspective on model enhancement, offering unique insights into improving translation quality without solely relying on pre-existing architectures. The originality in dimensionality reduction lies in the tailored approach for Arabic text. While dimensionality reduction techniques are not new, the paper introduces a specific method optimized for Arabic word embeddings. By employing independent component analysis (ICA) and a post-processing method, the paper effectively reduces the dimensionality of word embeddings while preserving semantic information, an approach that has not previously been investigated, especially for the MT task.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 3
Type: Research Article
ISSN: 1756-378X

