Search results
1 – 10 of 180
C. Syamili and R.V. Rekha
Abstract
Purpose
The purpose of this study is to illustrate the development of an ontology for the heroes of ancient Greek mythology and religion. At present, a number of ontologies exist in different domains; however, ontologies of epics and myths are comparatively few, and no such ontology has yet been developed for Greek mythology. This paper describes the attempt to develop an ontology for Greek mythology to fill this gap.
Design/methodology/approach
This paper follows a combination of different methodologies, which is assumed to be a more effective way of developing an ontology for mythology. It adopts the motivating-scenario concept from Gruninger and Fox, the development cycle from Methontology and the analytico-synthetic approach from Yet Another Methodology for Ontology, and is hence a combination of three existing approaches.
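For readers unfamiliar with how such an ontology is expressed in practice, the sketch below models a few Greek-mythology classes, relations and individuals with the owlready2 library. The IRI, class names, properties and individuals are illustrative assumptions, not the contents of the authors' ontology.

```python
# A minimal sketch (not the authors' ontology) of modelling mythological
# heroes, deities and their relationships with owlready2.
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/greek-mythology.owl")  # hypothetical IRI

with onto:
    class Hero(Thing): pass
    class Deity(Thing): pass
    class Monster(Thing): pass

    class hasParent(ObjectProperty):
        domain = [Hero]
        range = [Deity]

    class slays(ObjectProperty):
        domain = [Hero]
        range = [Monster]

    # Illustrative individuals and relationships
    zeus = Deity("Zeus")
    heracles = Hero("Heracles", hasParent=[zeus])
    hydra = Monster("LernaeanHydra")
    heracles.slays = [hydra]

# Save in RDF/XML so the file can be opened in Protégé
onto.save(file="greek_mythology.owl", format="rdfxml")
```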
Findings
A merged methodology was adopted for this paper. The developed ontology was evaluated against the information needs of its users. On the basis of the study, it was found that the Greek mythology ontology could answer 62 per cent of the questions after the first evaluation, i.e. 76 out of 123 questions. The unanswered questions were analyzed in detail to develop the ontology further; the missing concepts were fed into the ontology, and the ontology obtained after this stage was an exhaustive one.
Practical implications
This ontology will grow with time and can be used in semantic applications or e-learning modules related to the domain of Greek mythology.
Originality/value
This work is the first attempt to build an ontology for Greek mythology. The approach is unique in that it attempts to trace out the individual characteristics of, as well as the relationships between, the characters described in the work.
Meriem Laifa and Djamila Mohdeb
Abstract
Purpose
This study provides an overview of the application of sentiment analysis (SA) in exploring social movements (SMs). It also compares different models on an SA task over Algerian Arabic tweets related to the early days of the Algerian SM known as the Hirak.
Design/methodology/approach
Related tweets were retrieved using relevant hashtags and passed through multiple data-cleaning procedures. Foundational machine learning methods such as Naive Bayes, Support Vector Machine, Logistic Regression (LR) and Decision Tree were implemented. For each classifier, two feature extraction techniques were used and compared, namely Bag of Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF). Moreover, three fine-tuned pretrained transformers were included in the comparison: AraBERT, DziriBERT and the multilingual XLM-R.
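As an illustration of the classical part of this pipeline, the sketch below compares BoW and TF-IDF features across the four classifiers with scikit-learn. The input file and column names are hypothetical, and this is not the authors' code.

```python
# A minimal sketch: compare BoW vs TF-IDF features across several classifiers
# on a labelled tweet dataset (columns "text" and "label" are assumptions).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("hirak_tweets.csv")  # hypothetical labelled dataset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

vectorizers = {"BoW": CountVectorizer(), "TF-IDF": TfidfVectorizer()}
classifiers = {
    "NB": MultinomialNB(),
    "SVM": LinearSVC(),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(),
}

for vec_name, vec in vectorizers.items():
    Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)
    for clf_name, clf in classifiers.items():
        clf.fit(Xtr, y_train)
        pred = clf.predict(Xte)
        print(f"{vec_name:7s} {clf_name:3s} "
              f"acc={accuracy_score(y_test, pred):.3f} "
              f"f1={f1_score(y_test, pred, average='macro'):.3f}")
```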
Findings
The findings of this paper emphasize the vital role social media played during the Hirak. Results revealed that most individuals had a positive attitude toward the Hirak. Moreover, the presented experiments provide important insights into the possible use of both basic machine learning and transfer learning models for SA of Algerian text datasets. When comparing the machine learning models with the transformers in terms of accuracy, precision, recall and F1-score, the results are fairly similar, with LR outperforming all models with a 68 per cent accuracy rate.
Originality/value
At the time of writing, the Algerian SM had not been thoroughly investigated or discussed in the computer science literature. This analysis makes a limited but unique contribution to understanding the Algerian Hirak using artificial intelligence. The study proposes a basis for comprehending this event, with the goal of laying a foundation for future studies by comparing different SA techniques on a low-resource language.
Ghazzali N. Nadanveettil, Ibnu Noufal Kambitta Valappil, Hadungshar Swargiary and R. Sevukan
Abstract
Purpose
This study aims to present a scientometric mapping and altmetric analysis of publications related to “Hockey” over the past three decades. By using advanced mapping techniques coupled with altmetric analysis, the paper aims to reveal the complex network of collaborations, the worldwide dispersion of expertise and the prevailing thematic trends in the field of hockey.
Design/methodology/approach
The data were extracted from the Web of Science (WoS) database and Altmetric Explorer for articles related to hockey published over the past three decades. VOSviewer was used for the network analysis, whereas MS Excel was used for the altmetric data analysis. The study focused on English-language articles retrieved using the key term “Hockey”. Altmetric attention scores (AAS) were used to measure the level of online attention on different platforms, complementing the traditional bibliometric analysis.
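The sketch below illustrates one way such a co-authorship network can be assembled programmatically from a WoS tab-delimited export using networkx. The file name is hypothetical, and the workflow is an approximation of the kind of mapping VOSviewer performs, not the authors' exact procedure.

```python
# A minimal sketch: build a co-authorship network from a Web of Science
# tab-delimited export (the "AU" field lists authors separated by ";").
from itertools import combinations
import pandas as pd
import networkx as nx

records = pd.read_csv("wos_hockey_export.txt", sep="\t", dtype=str)  # hypothetical export

G = nx.Graph()
for authors in records["AU"].dropna():
    names = [a.strip() for a in authors.split(";") if a.strip()]
    for a, b in combinations(sorted(set(names)), 2):
        # Increment the edge weight for every paper the pair co-authored
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Most collaborative authors by weighted degree
top = sorted(G.degree(weight="weight"), key=lambda x: x[1], reverse=True)[:10]
for name, degree in top:
    print(name, degree)
```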
Findings
The study reveals a notable increase in the productivity of hockey research over the past 30 years, with major surges in publication output and altmetric attention in recent times. Co-authorship and country-wise mapping highlight global research collaboration trends, while keyword analysis underscores thematic concentrations. Key journals such as the British Journal of Sports Medicine and the American Journal of Sports Medicine emerge as crucial dissemination platforms. The altmetric analysis highlights the importance of X (formerly Twitter) posts and Mendeley in the diffusion of hockey literature.
Originality/value
The study provides a concise overview of research conducted on the game of hockey. This research will be advantageous for researchers and individuals involved in the hockey community, as it offers bibliographic insights and aids in identifying suitable media for disseminating their findings.
Owing to the worldwide outbreak of the SARS-CoV-2, social media conversations have increased. Given the increasing pressure from regulatory authorities and society, green…
Abstract
Purpose
Owing to the worldwide outbreak of SARS-CoV-2, social media conversations have increased. Given the increasing pressure from regulatory authorities and society, green accounting, as a dimension of sustainable development, remains one of the most discussed topics on social media platforms. This study aims to incorporate a technological approach to green accounting and sustainability to enhance the innovation process inside and outside organizations.
Design/methodology/approach
This study uses the hermeneutic phenomenological technique to investigate Twitter content. Tweets were subjected to a manual coding process to analyze their content, including recent advancements, challenges, cross-country initiatives and promotion strategies in green accounting. Public perception of green accounting and the COP26 climate summit was also studied.
Findings
Tweeters view green accounting favorably; however, they are apprehensive about its implementation. Among the challenges of green accounting, “corporate greenwashing” was the most tweeted topic. The UK was the top-rated nation with respect to green accounting development. Furthermore, the most discussed breakthrough was the application of artificial intelligence to green accounting functions. Twitter users, however, directed heavy criticism at the COP26 climate summit in Glasgow.
Originality/value
This study’s primary innovation is its integration of emerging technologies such as machine learning and data mining with social media platforms such as Twitter. Incorporating manual coding of tweets is a rigorous procedure that amplifies the strength of machine learning software’s auto-coding feature.
Under the background of open science, this paper integrates altmetrics data and combines multiple evaluation methods to analyze and evaluate the indicators' characteristics of…
Abstract
Purpose
Against the background of open science, this paper integrates altmetrics data and combines multiple evaluation methods to analyze the characteristics of discourse-leading indicators for academic journals, which is of great significance for enriching and improving the evaluation theory and indicator system of academic journals.
Design/methodology/approach
This paper obtained 795,631 citations and 10.3 million altmetrics indicator data points for 126,424 papers published in 151 academic journals in the medicine, general and internal category. At the macro level, descriptive statistical analysis of the evaluation indicators and their distribution rules is first carried out; at the micro level, the distribution characteristics of the evaluation indicators under different international collaboration conditions are analyzed. Second, the evaluation indicator system is constructed according to the characteristics and connotation of the evaluation indicators. Third, correlation analysis, factor analysis, the entropy weight method and the TOPSIS method are adopted to evaluate the discourse leading of medicine, general and internal academic journals by integrating altmetrics. Finally, the paper verifies the reliability of the evaluation results.
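As a concrete illustration of the entropy weight and TOPSIS steps, the sketch below scores a toy journals-by-indicators matrix. It assumes all indicators are benefit-type and is not the authors' implementation or data.

```python
# A minimal sketch of the entropy weight method followed by TOPSIS on a
# (journals x indicators) matrix; the toy numbers are illustrative only.
import numpy as np

def entropy_weights(X):
    """Entropy weight method: weight indicators by their information diversity."""
    P = X / X.sum(axis=0)                     # column-wise proportions
    P = np.where(P == 0, 1e-12, P)            # avoid log(0)
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (P * np.log(P)).sum(axis=0)
    diversity = 1.0 - entropy
    return diversity / diversity.sum()

def topsis(X, w):
    """TOPSIS closeness scores, treating every indicator as benefit-type."""
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * w   # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Toy data: 4 journals x 3 indicators (e.g. citations, tweets, Mendeley readers)
X = np.array([[120.0, 300.0, 80.0],
              [ 90.0, 500.0, 60.0],
              [200.0, 100.0, 90.0],
              [150.0, 250.0, 70.0]])
w = entropy_weights(X)
print("weights:  ", w.round(3))
print("closeness:", topsis(X, w).round(3))
```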
Findings
Six features of discourse leading integrated with altmetrics indicators are obtained. In the era of open science, online academic exchange is becoming increasingly popular, and evaluation based on altmetrics has fine-grained and procedural advantages. It is both feasible and necessary to integrate altmetrics indicators and combine the advantages of multiple methods to evaluate the discourse leading of academic journals within a diversified academic ecosystem.
Originality/value
This paper uses descriptive statistical analysis to examine the distribution characteristics and distribution rules of discourse-leading indicators of academic journals and to explore the availability of altmetrics indicators and the effectiveness of constructing an evaluation system. Combining the advantages of multiple evaluation methods, the author then integrates altmetrics indicators to comprehensively evaluate the discourse leading of academic journals and verifies the reliability of the evaluation results. The paper aims to provide a reference for enriching and improving the evaluation theory and indicator system of academic journals.
Yu-Jung Cheng and Shu-Lai Chou
Abstract
Purpose
This study applies digital humanities tools (Gephi and Protégé) to establish and visualize ontologies in the cultural heritage domain. Building on this, it aims to develop a novel evaluation approach using five ontology indicators (data overview, visual presentation, highlighted links, scalability and querying) to evaluate how a cultural heritage ontology presents its knowledge structure.
Design/methodology/approach
The researchers collected and organized 824 items of government open data (GOD), converted them into the Resource Description Framework (RDF) format and applied Protégé and Gephi to establish and visualize the cultural heritage ontology. After the ontology was built, the study recruited 60 participants (30 with an information and communications technology background and 30 with a cultural heritage background) to operate the ontology and gathered their perspectives on the visualized ontology.
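The sketch below illustrates the kind of conversion described here, turning tabular open data about heritage sites into RDF triples with rdflib. The CSV columns, namespace and property names are illustrative assumptions rather than the study's actual schema.

```python
# A minimal sketch (not the authors' pipeline): convert tabular government
# open data about heritage sites into RDF triples with rdflib.
import csv
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/heritage/")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

with open("heritage_open_data.csv", newline="", encoding="utf-8") as f:  # hypothetical file
    for row in csv.DictReader(f):
        site = URIRef(EX[row["id"]])
        g.add((site, RDF.type, EX.HeritageSite))
        g.add((site, EX.name, Literal(row["name"])))
        g.add((site, EX.era, Literal(row["era"])))
        g.add((site, EX.locatedIn, Literal(row["city"])))

g.serialize(destination="heritage.rdf", format="xml")  # RDF/XML for Protégé
print(len(g), "triples written")
```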
Findings
Based on the participants' feedback, this study found that Gephi supports ontology visualization better than Protégé, especially in the data overview, visual presentation and highlighted-links dimensions; its visualization of the ontology's class hierarchy and property relations facilitates the wider application of the ontology.
Originality/value
This study offers two contributions. First, the researchers analyzed data on East Asian architecture with novel digital humanities tools to visualize an ontology for cultural heritage. Second, the study collected participants' feedback on the visualized ontology to enhance its design, which can serve as a reference for future ontology development.
Shubangini Patil and Rekha Patil
Abstract
Purpose
Much research has already been done and applied to secure data passed from one user to another, such as third-party auditing and schemes that secure data by generating keys with encryption algorithms such as Rivest-Shamir-Adleman. Some of the related prior work is summarized here. The remote damage control resuscitation (RDCR) scheme of Yan et al. (2017) is based on minimum bandwidth and enables a third party to perform public integrity verification. Although it supports repair management for corrupted data and tries to recover the original data, in practice it fails to do so and therefore incurs more computation and communication cost than the proposed system. Chen et al. (2015) developed an idea for cloud storage data sharing using broadcast encryption. The technique aims to accomplish both broadcast data and dynamic sharing, allowing users to join and leave a group without affecting the electronic press kit (EPK). The theoretical notion was sound and new, but the system's practicality and efficiency were not acceptable, and its security was also jeopardised because it proposed adding a member without altering any keys. Jiang and Guo (2017) investigated an identity-based encryption strategy for data sharing, together with key management and metadata techniques to improve model security; forward and reverse ciphertext security is supplied, but the scheme is more difficult to put into practice, and one of its limitations is that it suits only very large amounts of cloud storage, although it extends support for dynamic data modification through batch auditing. The secure and efficient privacy-preserving provable data possession scheme for cloud storage supports every important feature, including data dynamics, privacy preservation, batch auditing and blockers verification, for an untrusted and outsourced storage model (Pathare and Chouragadec, 2017). A homomorphic signature mechanism based on a new identity was devised to avoid the use of public key certificates; this signature system was shown to be resistant to the identity attack on the random oracle model and to forged-message attacks (Nayak and Tripathy, 2018; Lin et al., 2017).

When storing data in a public cloud, one issue is that the data owner must give an enormous number of keys to users for them to access the files. The knowledge assisted software engineering (KASE) scheme was introduced to address this: while sharing a huge number of documents, the data owner only has to supply a single key to the user, and the user only needs to provide a single trapdoor. Although the concept is innovative, the KASE technique does not apply to the increasingly common manufactured cloud. Cui et al. (2016) claim that as the amount of data grows, the distribution management system (DMS) will be unable to handle it; as a result, various provable data possession (PDP) schemes have been developed, and practically all of them lack security. Certificate-based PDP built on bilinear pairing was therefore introduced; because it is robust as well as efficient, it is mostly applicable in DMS. The main purpose of this research is to design and implement a secure cloud infrastructure for sharing group data. It provides an efficient and secure protocol for multiple-user data in the cloud, allowing many users to share data easily.
Design/methodology/approach
The methodology and contribution of this paper are as follows. The major goal of this study is to design and implement a secure cloud infrastructure for sharing group data, and it provides an efficient and secure protocol for multiple-user data in the cloud, allowing several users to share data without difficulty. The selection scheme design (SSD) comprises two algorithms: the first is designed for a limited number of users, and the second is redesigned for multiple users. Further, the authors design the SSD-security protocol, which comprises a three-phase model: Phase 1 generates the parameters and distributes the private keys, Phase 2 generates the general key for all available users, and Phase 3 is designed to prevent dishonest users from taking part in data sharing.
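The sketch below mirrors the three-phase shape of the protocol described above (setup and private-key distribution, derivation of a common key for the available users, and exclusion of dishonest users). It uses plain modular exponentiation held by a trusted dealer instead of the paper's pairing-based construction, so it illustrates the phase structure only and is not the SSD-security protocol itself.

```python
# A hedged, illustrative three-phase group-key workflow; demo parameters only,
# not a secure or pairing-based construction.
import hashlib
import secrets

P = 2**127 - 1   # small Mersenne prime for readability, not cryptographic strength
G = 3            # demo generator

def phase1_setup(user_ids):
    """Phase 1: generate system parameters and one private key per user,
    held here by a trusted dealer (key generation centre)."""
    return {u: secrets.randbelow(P - 2) + 2 for u in user_ids}

def phase2_group_key(private_keys, revoked=frozenset()):
    """Phase 2: derive a single shared key from the non-revoked users'
    contributions G**x mod P, then hash it into a symmetric key."""
    shared = 1
    for user, x in private_keys.items():
        if user in revoked:
            continue
        shared = (shared * pow(G, x, P)) % P
    digest = hashlib.sha256(shared.to_bytes((shared.bit_length() + 7) // 8, "big"))
    return digest.hexdigest()

def phase3_exclude(private_keys, dishonest):
    """Phase 3: re-key the group so excluded (dishonest) users cannot
    recover the new common key."""
    return phase2_group_key(private_keys, revoked=set(dishonest))

keys = phase1_setup(["alice", "bob", "carol"])
print("group key   :", phase2_group_key(keys)[:16], "...")
print("after re-key:", phase3_exclude(keys, ["carol"])[:16], "...")
```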
Findings
Data sharing in cloud computing provides unlimited computational resources and storage to enterprises and individuals; at the same time, cloud computing raises several privacy and security concerns such as fault tolerance, reliability, confidentiality and data integrity. Furthermore, the key consensus mechanism is a fundamental cryptographic primitive for secure communication; motivated by this, the authors developed the SSD mechanism, which embraces multiple users in the data-sharing model.
Originality/value
Files shared in the cloud should be encrypted for security purposes and later decrypted for users to access. Furthermore, the key consensus process is a crucial cryptographic primitive for secure communication, and the SSD mechanism devised by the authors incorporates numerous users in the data-sharing model. For the evaluation of the SSD method, the authors considered an ideal system environment: Java was used as the programming language and Eclipse as the integrated development environment (IDE) for evaluating the proposed model. The hardware configuration comprised 4 GB of RAM and an i7 processor, and the PBC library was used for the pairing operations (PBC Library, 2022). In the evaluation, the number of users is varied for comparison with the existing RDIC methodology (Li et al., 2020). For the purposes of the SSD-security protocol, a prime number is chosen as the number of users in this work.
Meriam Trabelsi, Elena Casprini, Niccolò Fiorini and Lorenzo Zanni
Abstract
Purpose
This study analyses the literature on artificial intelligence (AI) and its implications for the agri-food sector. This research aims to identify the current research streams, main methodologies used, findings and results delivered, gaps and future research directions.
Design/methodology/approach
This study relies on 69 published contributions in the field of AI in the agri-food sector. It begins with bibliographic coupling to map and identify the current research streams and proceeds with a systematic literature review to examine the main topics and contributions.
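For readers unfamiliar with bibliographic coupling, the sketch below computes coupling strengths (the number of shared references between two papers) on toy data; the clusters in the study were obtained from the real reference lists of the 69 contributions, not from this illustration.

```python
# A minimal sketch of bibliographic coupling: two papers are coupled when
# their reference lists overlap, and the coupling strength is the number of
# shared references. The reference lists below are toy data.
from itertools import combinations

references = {            # paper id -> set of cited works
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R5", "R6"},
    "P4": {"R1", "R3", "R6"},
}

coupling = {}
for (a, ra), (b, rb) in combinations(references.items(), 2):
    strength = len(ra & rb)        # number of shared references
    if strength:
        coupling[(a, b)] = strength

# The most strongly coupled pairs form the basis of the clusters
for pair, strength in sorted(coupling.items(), key=lambda kv: -kv[1]):
    print(pair, strength)
```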
Findings
Six clusters were identified: (1) AI adoption and benefits, (2) AI for efficiency and productivity, (3) AI for logistics and supply chain management, (4) AI for supporting decision making process for firms and consumers, (5) AI for risk mitigation and (6) AI marketing aspects. Then, the authors propose an interpretive framework composed of three main dimensions: (1) the two sides of AI: the “hard” side concerns the technology development and application while the “soft” side regards stakeholders' acceptance of the latter; (2) level of analysis: firm and inter-firm; (3) the impact of AI on value chain activities in the agri-food sector.
Originality/value
This study provides interpretive insights into the extant literature on AI in the agri-food sector, paving the way for future research and inspiring practitioners of different AI approaches in a traditionally low-tech sector.
Rekha Yoganathan, Jamuna Venkatesan and William Christopher I.
Abstract
Purpose
This paper intends to design, develop and fabricate a robust cascaded controller based on a dual-loop concept, with a fuzzy sliding mode controller in the inner loop and a traditional proportional-integral (PI) controller in the outer loop, to reduce the effect of unknown dynamics and disturbances that occur in the DC-DC converter.
Design/methodology/approach
The proposed fuzzy sliding mode control (FSMC) approach combines the merits of both sliding mode control (SMC) and fuzzy logic control. The FSMC approach reduces the chattering phenomenon that commonly occurs in sliding mode control and speeds up the controller response.
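The sketch below illustrates the cascaded structure on an averaged buck converter model: an outer PI voltage loop sets the inductor-current reference and an inner sliding-mode current loop sets the duty cycle, with a smooth gain schedule on |s| standing in for the paper's fuzzy rule base. All component values and gains are assumptions, not the authors' design.

```python
# A hedged sketch of a cascaded PI (outer voltage loop) + sliding-mode
# (inner current loop) controller on an averaged buck converter model.
import numpy as np

# Assumed converter parameters and controller gains (illustrative only)
Vin, L, C, R = 24.0, 330e-6, 470e-6, 10.0     # input voltage, inductor, capacitor, load
Vref, dt, T = 12.0, 1e-6, 0.02                # voltage setpoint, time step, horizon
Kp, Ki = 0.5, 200.0                           # outer PI gains
lam, k_max = 0.9, 0.4                         # inner sliding-mode gains

iL = vo = integ = 0.0
for _ in range(int(T / dt)):
    # Outer loop: PI on the output-voltage error sets the current reference
    ev = Vref - vo
    integ += ev * dt
    i_ref = Kp * ev + Ki * integ

    # Inner loop: sliding surface on the current error; a smooth tanh gain
    # stands in for the fuzzy rule base that tempers the switching term
    s = i_ref - iL
    k = k_max * np.tanh(abs(s))
    d = np.clip(vo / Vin + lam * s + k * np.sign(s), 0.0, 1.0)   # duty cycle

    # Averaged buck converter dynamics
    iL += dt * (d * Vin - vo) / L
    vo += dt * (iL - vo / R) / C

print(f"steady-state output voltage: {vo:.2f} V (target {Vref:.1f} V)")
```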
Findings
In most previous research, the inner current loop of the cascaded controller was designed with sliding mode control. In this paper, FSMC is proposed and its efficacy is confirmed against SMC-PI. Under most uncertainties, FSMC-PI produces zero maximum peak overshoot and a very short settling time of 0.0005 s.
Originality/value
The presence of fuzzy SMC in the inner loop ensures a satisfactory response under all conditions and uncertainties, such as steady state, circuit parameter variations and sudden line and load disturbances.
Rajneesh Kumar, Aseem Miglani and Rekha Rani
Abstract
Purpose
The purpose of this paper is to study the axisymmetric problem of a micropolar porous thermoelastic circular plate with a dual-phase-lag model, subjected to thermomechanical sources, by employing an eigenvalue approach.
Design/methodology/approach
The Laplace and Hankel transforms are employed to obtain the expressions for displacements, microrotation, volume fraction field, temperature distribution and stresses in the transformed domain. A numerical inversion technique is then used to obtain the resulting quantities in the physical domain. The effects of porosity and phase lag on the resulting quantities are presented graphically. The results for the Lord-Shulman theory (L-S, 1967) and the coupled theory of thermoelasticity are presented as particular cases.
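The paper does not name its numerical inversion technique, so the sketch below uses the Gaver-Stehfest algorithm as a representative stand-in, demonstrated on F(s) = 1/(s + 1), whose exact inverse is exp(-t).

```python
# A minimal sketch of numerical Laplace inversion with the Gaver-Stehfest
# algorithm (an assumed stand-in, not necessarily the authors' scheme).
import math

def stehfest_coefficients(N):
    """Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append(((-1) ** (k + N // 2)) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) at a single time t."""
    V = stehfest_coefficients(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

for t in (0.5, 1.0, 2.0):
    approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t)
    print(f"t={t}: stehfest={approx:.6f}, exact={math.exp(-t):.6f}")
```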
Findings
The variation of the temperature distribution is similar for the micropolar thermoelastic dual-phase-lag (MTD) model and the coupled theory of thermoelasticity. The variation of the tangential couple stress is also similar for the MTD model and L-S theory but opposite to that of the coupled theory. The behaviors of the volume fraction field and the tangential couple stress for L-S theory and the coupled theory are observed to be opposite. The values of all the resulting quantities are close to each other away from the sources, and the variations in tangential stress, tangential couple stress and temperature distribution are more uniform.
Originality/value
The results are original and new because the authors present an eigenvalue approach for the two-dimensional problem of a micropolar porous thermoelastic circular plate with a dual-phase-lag model. A comparison of porosity, L-S theory and the coupled theory of micropolar thermoelasticity is made. Such problems have applications in materials science, industry and earthquake studies.