Search results

1 – 10 of 10
Open Access
Article
Publication date: 24 October 2021

Piergiorgio Alotto, Paolo Di Barba, Alessandro Formisano, Gabriele Maria Lozito, Raffaele Martone, Maria Evelina Mognaschi, Maurizio Repetto, Alessandro Salvini and Antonio Savini

Abstract

Purpose

Inverse problems in electromagnetism, namely, the recovery of sources (currents or charges) or system data from measured effects, are usually ill-posed or, in the numerical formulation, ill-conditioned and require suitable regularization to provide meaningful results. To test new regularization methods, there is a need for benchmark problems whose numerical properties and solutions are well known. Hence, this study aims to define a benchmark problem suitable for testing new regularization approaches and to solve it with different methods.

Design/methodology/approach

To assess the reliability and performance of different solving strategies for inverse source problems, a benchmark problem of current synthesis is defined and solved by means of several regularization methods in a comparative way; subsequently, an approach in terms of an artificial neural network (ANN) is considered as a viable alternative to classical regularization schemes. The solution of the underlying forward problem is based on a finite element analysis.
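
For readers unfamiliar with the classical schemes being compared, the sketch below illustrates Tikhonov regularization on a synthetic ill-conditioned linear system of the same flavour (a rapidly decaying singular-value spectrum); the matrix, noise level and regularization parameter are invented for illustration and are not the paper's benchmark.

```python
import numpy as np

# Synthetic stand-in for an ill-conditioned lead field matrix: orthogonal
# factors with rapidly decaying singular values, so naive inversion
# amplifies measurement noise.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
s = 10.0 ** -np.arange(20)                       # singular values 1 ... 1e-19
A = U[:, :20] @ np.diag(s) @ V.T                 # 50 measurements, 20 sources

x_true = rng.standard_normal(20)                 # unknown source vector
b = A @ x_true + 1e-6 * rng.standard_normal(50)  # noisy measurements

# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2 via the
# regularized normal equations.
lam = 1e-8
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized least squares
print("condition number:  ", np.linalg.cond(A))
print("naive error:       ", np.linalg.norm(x_naive - x_true))
print("regularized error: ", np.linalg.norm(x_reg - x_true))
```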

Findings

The paper provides a very detailed analysis of the proposed inverse problem in terms of numerical properties of the lead field matrix. The solutions found by different regularization approaches and an ANN method are provided, showing the performance of the applied methods and the numerical issues of the benchmark problem.

Originality/value

The value of the paper is to provide the numerical characteristics and issues of the proposed benchmark problem in a comprehensive way, by means of a wide variety of regularization methods and an ANN approach.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 40, no. 6
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 19 September 2019

Mario Schenk, Annette Muetze, Klaus Krischan and Christian Magele

Abstract

Purpose

The purpose of this paper is to evaluate the worst-case behavior of a given electronic circuit by varying the values of the components in a meaningful way so as not to exceed pre-defined current or voltage limits during transient operation.

Design/methodology/approach

An analytic formulation is used to identify the time-dependent solution of voltages or currents using proper state equations in closed form. Circuits with linear elements can be described by a system of differential equations, while circuits comprising nonlinear elements are described by piecewise-linear models. A sequential quadratic program (SQP) is used to find the worst-case scenario.
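
A minimal sketch of the optimization step, assuming for illustration a series RLC step response as a stand-in for the paper's closed-form forward solver, and using SciPy's SLSQP implementation to search the component tolerances for the worst-case peak current:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical analytic forward solver: peak inductor current of a series RLC
# circuit driven by a 1 V step (underdamped case), in closed form.
def peak_current(params):
    R, L, C = params
    alpha = R / (2 * L)                           # damping coefficient
    w0 = 1 / np.sqrt(L * C)                       # undamped natural frequency
    wd = np.sqrt(max(w0**2 - alpha**2, 1e-12))    # damped frequency
    t = np.linspace(0.0, 1e-3, 2000)
    i = (1.0 / (wd * L)) * np.exp(-alpha * t) * np.sin(wd * t)
    return i.max()

# Component tolerances (nominal +/- 10%); the worst case maximizes the peak.
nominal = np.array([10.0, 1e-3, 1e-6])            # R [ohm], L [H], C [F]
bounds = [(0.9 * v, 1.1 * v) for v in nominal]

res = minimize(lambda p: -peak_current(p),        # SLSQP minimizes, so negate
               x0=nominal, method="SLSQP", bounds=bounds)
print("worst-case R, L, C:", res.x)
print("worst-case peak current [A]:", -res.fun)
```

The same call accepts general nonlinear constraints when the worst case is defined by voltage or current limits rather than simple component bounds.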

Findings

It is found that, by applying an SQP method, the worst-case scenario can be obtained with only a small number of solutions to the forward problem.

Originality/value

The SQP method, in combination with the analytic forward solver, converges to the worst case in a few steps, even when the worst case does not lie on the boundary of the parameter space.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38, no. 5
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 6 July 2020

Carlo Ricciardi, Alfonso Sorrentino, Giovanni Improta, Vincenzo Abbate, Imma Latessa, Antonietta Perrone, Maria Triassi and Giovanni Dell'aversana Orabona

Abstract

Purpose

Head and neck cancers are multi-factorial diseases that can affect many aspects of people's lives and arise from numerous risk factors. Depending on their characteristics, the treatment can be surgical, radiation-based or chemotherapy. Surgical treatment can lead to surgical infections, a major concern in medicine. At the University hospital of Naples “Federico II”, two antibiotics were employed to tackle the issue of infections; they are compared in this paper to find which one yields the lower length of hospital stay (LOS) and the greater reduction in infections.

Design/methodology/approach

The Six Sigma methodology and its problem-solving strategy DMAIC (define, measure, analyse, improve, control), already employed in the healthcare sector, were used as a tool for a health technology assessment of two drugs. In this paper, the DMAIC roadmap is used to compare Ceftriaxone (administered to a group of 48 patients) with the association of Cefazolin plus Clindamycin (administered to a group of 45 patients).

Findings

The results show that the LOS of patients treated with Ceftriaxone is lower than that of patients treated with the association of Cefazolin plus Clindamycin; the difference is about 41%. Moreover, a lower number of complications and infections was found in patients who received Ceftriaxone. Finally, a greater number of antibiotic shifts was needed by patients treated with Cefazolin plus Clindamycin.
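
The abstract does not state which statistical test underlies the comparison; purely as an illustration, a nonparametric check on two hypothetical LOS samples with the abstract's group sizes (48 vs. 45) might look like this:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Invented LOS samples in days; only the group sizes (48 and 45) and the
# rough 41% mean difference are taken from the abstract.
los_ceftriaxone = rng.gamma(shape=4.0, scale=2.0, size=48)   # mean ~ 8 days
los_combination = rng.gamma(shape=4.0, scale=3.4, size=45)   # mean ~ 13.6 days

stat, p = mannwhitneyu(los_ceftriaxone, los_combination, alternative="less")
reduction = 1 - los_ceftriaxone.mean() / los_combination.mean()
print(f"mean LOS reduction: {reduction:.0%}, Mann-Whitney U p = {p:.4g}")
```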

Research limitations/implications

While the paper clearly demonstrates the advantages for patients' outcomes regarding the LOS and the number of complications, it did not analyse the costs of the two antibiotics.

Practical implications

Employing Ceftriaxone would allow the Department of Maxillofacial Surgery to obtain a lower LOS and a limited number of complications/infections for recovered patients, consequently reducing hospitalization costs.

Originality/value

There is a double value in this paper: first, the comparison between the two antibiotics addresses one of the main issues in medicine, namely the reduction of hospital-acquired infections; second, Six Sigma, through its DMAIC cycle, can also be employed to compare two biomedical technologies as a tool for health technology assessment studies.

Details

The TQM Journal, vol. 32 no. 6
Type: Research Article
ISSN: 1754-2731

Open Access
Article
Publication date: 31 October 2023

Neema Florence Mosha and Patrick Ngulube

Abstract

Purpose

The study aims to investigate the utilisation of open research data repositories (RDRs) for storing and sharing research data in higher learning institutions (HLIs) in Tanzania.

Design/methodology/approach

A survey research design was employed to collect data from postgraduate students at the Nelson Mandela African Institution of Science and Technology (NM-AIST) in Arusha, Tanzania. The data were collected and analysed quantitatively and qualitatively. A census sampling technique was employed to select the sample for this study. The quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS), whilst the qualitative data were analysed thematically.

Findings

Less than half of the respondents were aware of and were using open RDRs, including Zenodo, DataVerse, Dryad, OMERO, GitHub and Mendeley data repositories. More than half of the respondents were not willing to share research data, citing a loss of ownership after depositing their research data in most of the open RDRs, as well as data security concerns. HLIs need to conduct training on using trusted repositories and motivate postgraduate students to utilise open repositories (ORs). The challenges behind the underutilisation of open RDRs were a lack of policies governing the storage and sharing of research data and grant constraints.

Originality/value

Research data storage and sharing are of great interest to researchers in HLIs, and these findings can inform the implementation of open RDRs to support them. Open RDRs increase visibility within HLIs, reduce research data loss and allow research works to be cited and used publicly. This paper identifies the potential for additional studies focussed on this area.

Open Access
Article
Publication date: 27 September 2021

Yamini Meduri

Abstract

Purpose

This study aims to explain the importance of human resources and attempts to identify the competencies required by the personnel involved in disaster management operations.

Design/methodology/approach

The research uses a qualitative methodology to explore the competencies required by relief workers, using a content analysis approach to analyze the disaster literature and job advertisements. The data for the content analysis were developed with the help of 23 independent coders, and exploratory inferences were drawn.
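
With 23 independent coders, intercoder agreement is the natural reliability check; the abstract does not say which statistic was used, so the following is only a sketch of one common choice, Fleiss' kappa, on invented ratings rather than the study's codes:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(2)
# Hypothetical coding matrix: 40 text units, each assigned by 23 coders to
# one of 4 competency clusters (cluster count borrowed from the findings).
ratings = rng.integers(0, 4, size=(40, 23))

table, _ = aggregate_raters(ratings)   # units x categories count table
print("Fleiss' kappa:", fleiss_kappa(table))
```

Random ratings as above yield a kappa near zero; real codes would be expected to score substantially higher.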

Findings

A detailed review of the literature highlighted the importance of competent personnel in disaster relief organizations. The analysis listed 34 mutually exclusive competencies and their relative importance, which were further divided into four competency clusters. The study also creates a competency dictionary that defines the competencies with the expected behaviors.

Practical implications

Deploying the right resources in the acute time frame during a disaster event can make a difference, and with lives at stake, such deployment acquires prime importance. In addition to contributing to humanitarian logistics literature, the competency model developed will also help forecast the future requirements and help the organization choose “the right person for the right job.”

Originality/value

The inferences drawn in the study are based on the disaster management domain alone, unlike earlier research, which also drew on business logistics research.

Details

RAUSP Management Journal, vol. 56 no. 4
Type: Research Article
ISSN: 2531-0488

Open Access
Article
Publication date: 29 July 2020

Kai Nishikawa

Abstract

Purpose

The purpose of this paper is to survey how research data are governed at repositories in Japan by deductively establishing a governance typology based on the concept of openness in the context of knowledge commons and empirically assessing the conformity of repositories to each type.

Design/methodology/approach

The fuzzy-set ideal type analysis (FSITA) was adopted. For data collection, a manual assessment was conducted with all Japanese research data repositories registered on re3data.org.

Findings

The typology constructed in this paper consists of three dimensions: openness to resources (here equal to research data), openness to a community and openness to infrastructure provision. This paper found that there is no case where all dimensions are open, and there are several cases where the resources are closed despite research data repositories being positioned as a basis for open science in Japanese science and technology policy.
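
As an illustration of the FSITA logic, the sketch below scores one hypothetical repository against all eight ideal types formed by the paper's three dimensions, using the standard fuzzy AND (minimum), with negation for dimensions a type treats as closed; the membership values are invented:

```python
from itertools import product

# Invented fuzzy membership scores in [0, 1] for one repository on the three
# dimensions named in the abstract (illustrative values only).
dims = {"resources": 0.2, "community": 0.8, "infrastructure": 0.6}

# An ideal type fixes each dimension as open (True) or closed (False);
# membership in the type is the fuzzy AND (minimum), negating closed ones.
def type_membership(scores, profile):
    return min(s if is_open else 1 - s
               for s, is_open in zip(scores.values(), profile))

for profile in product([True, False], repeat=len(dims)):
    label = "/".join("open" if o else "closed" for o in profile)
    print(f"{label:24s} membership = {type_membership(dims, profile):.2f}")
```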

Originality/value

This is likely the first construction of such a typology and the first application of FSITA to the study of research data governance based on knowledge commons. The findings of this paper provide practitioners with insight into how to govern research data at repositories. The typology serves as a first step for future research on knowledge commons, for example, as a criterion for case selection when conducting in-depth case studies.

Details

Aslib Journal of Information Management, vol. 72 no. 5
Type: Research Article
ISSN: 2050-3806

Open Access
Article
Publication date: 9 February 2022

Anja Perry and Sebastian Netscher

Abstract

Purpose

Budgeting data curation tasks in research projects is difficult. In this paper, we investigate the time spent on data curation, more specifically on cleaning and documenting quantitative data for data sharing. We develop recommendations on cost factors in research data management.

Design/methodology/approach

We make use of a pilot study conducted at the GESIS Data Archive for the Social Sciences in Germany between December 2016 and September 2017. During this period, data curators at GESIS - Leibniz Institute for the Social Sciences documented their working hours while cleaning and documenting data from ten quantitative survey studies. We analyse the recorded times and discuss them with the data curators involved in this work to identify and examine important cost factors in data curation, that is, aspects that increase the hours spent and factors that reduce the work.

Findings

We identify two major drivers of time spent on data curation: the size of the data and the personal information contained in the data. Learning effects can occur when datasets are similar, that is, when they contain the same variables. Important interdependencies exist between individual tasks in data curation and in connection with certain data characteristics.
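
As an illustration of how such cost drivers could be quantified, a minimal regression sketch on invented records (not the GESIS data), modelling curation hours on dataset size and a personal-information flag:

```python
import numpy as np

# Invented curation records: hours spent, number of variables in the dataset
# and whether it contained personal information (1) or not (0).
variables = np.array([120, 300, 80, 450, 200, 150, 520, 90, 310, 260])
personal  = np.array([0, 1, 0, 1, 0, 1, 1, 0, 0, 1])
hours     = np.array([14, 46, 10, 66, 22, 30, 75, 12, 33, 44])

# Ordinary least squares via numpy: intercept + two cost-driver coefficients.
X = np.column_stack([np.ones_like(personal), variables, personal])
coef, *_ = np.linalg.lstsq(X, hours, rcond=None)
print("intercept, hours per variable, personal-data surcharge:", coef)
```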

Originality/value

The different tasks of data curation, time spent on them and interdependencies between individual steps in curation have so far not been analysed.

Details

Journal of Documentation, vol. 78 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 30 March 2023

Sofia Baroncini, Bruno Sartini, Marieke Van Erp, Francesca Tomasi and Aldo Gangemi

Abstract

Purpose

In the last few years, the size of Linked Open Data (LOD) describing artworks, in general-purpose or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on the icon aspects.

Design/methodology/approach

This study’s analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians’ theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures’ suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
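
As an illustration of the kind of KG probe such an evaluation builds on, the sketch below retrieves "depicts" (P180) statements for paintings from Wikidata; the endpoint, classes and properties are real, but the query is a toy example, not the authors' evaluation pipeline:

```python
import requests

# Fetch a few iconographic ("depicts") statements for paintings from the
# public Wikidata SPARQL endpoint.
query = """
SELECT ?painting ?paintingLabel ?subjectLabel WHERE {
  ?painting wdt:P31 wd:Q3305213 ;   # instance of: painting
            wdt:P180 ?subject .     # depicts
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"},
                 headers={"User-Agent": "icon-data-quality-sketch/0.1"},
                 timeout=60)
for row in r.json()["results"]["bindings"]:
    print(row["paintingLabel"]["value"], "->", row["subjectLabel"]["value"])
```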

Findings

This study’s results reveal several issues on the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.

Originality/value

The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. Therefore, it is valuable to cultural institutions by providing them with a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 12 December 2022

Bianca Gualandi, Luca Pareschi and Silvio Peroni

Abstract

Purpose

This article describes the interviews the authors conducted in late 2021 with 19 researchers at the Department of Classical Philology and Italian Studies at the University of Bologna. The main purpose was to shed light on the definition of the word “data” in the humanities domain, as far as FAIR data management practices are concerned, and on what researchers think of the term.

Design/methodology/approach

The authors invited one researcher for each of the official disciplinary areas represented within the department, and all 19 accepted the invitation to participate in the study. Participants were then divided into five main research areas: philology and literary criticism, language and linguistics, history of art, computer science and archival studies. The interviews were transcribed and analysed using a grounded theory approach.

Findings

A list of 13 research data types was compiled from the information collected from participants. The term “data” does not emerge as especially problematic, although a good deal of confusion remains. Within current research management practices, methodologies and teamwork appear more central than previously reported.

Originality/value

Our findings confirm that “data” within the FAIR framework should include all types of inputs and outputs that humanities researchers work with, including publications. Also, the participants of this study appear ready for a discussion around making their research data FAIR: they do not find the terminology particularly problematic, and they rely on precise and recognised methodologies, as well as on sharing and collaboration with colleagues.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 3 November 2022

Zainab Bintay Anis, Rashid Iqbal, Wahab Nazir and Nauman Khalid

Abstract

Purpose

The novel coronavirus (SARS-CoV-2) that emerged in 2019 has taken more than 3.8 million lives, according to the World Health Organization. To stop the spread of such a deadly and contagious disease, lockdowns of varying nature were imposed worldwide. Lockdowns, preventive techniques and observation of standard operating procedures (SOPs) have effectively decreased the spread of contagious diseases but have affected various businesses and industries economically. The food industry has been hit hard by the various restrictions, which disrupted food supply and demand. Therefore, this study aims to examine this disruption in the supply chain of processed food.

Design/methodology/approach

A comprehensive review was conducted on PubMed, Google Scholar, and Scopus to locate articles on processed foods, food delivery and supply chain. The selected articles were evaluated using the context analysis method.

Findings

The pandemic situation has increased the consumption of and demand for processed food products from retail stores and decreased the demand for food service products. These circumstances called for technological advancement in the field of food supply from farm to fork. This study reviews research articles, policies and secondary literature. Several advances have been made to deliver safe, nutritious and wholesome food to consumers. Blockchain-based food supply chains, value stream mapping, the sustainable supply chain domain and online ordering systems via mobile apps are discussed in connection with information and communication technology (ICT) during COVID-19.

Research limitations/implications

This study concludes that the use of advanced software, and adequate knowledge of it among suppliers, logistics companies and consumers, has assisted in handling shocks to the global food system and provided timely food delivery, traceability, database information and securely processed food to consumers.

Originality/value

This study shows the effects of the COVID-19 pandemic on global food systems: how the disruption in food demand and the supply chain unfolded, how technological advances in the food supply chain helped tackle the pandemic and how online food ordering systems gained popularity and improved technically.

Highlights

  1. The review highlights the effects of the COVID-19 pandemic on global food systems.

  2. The disruption in food demand and the supply chain is reviewed.

  3. Technological advances in the food supply chain are used to tackle the pandemic.

  4. The online food ordering system gained popularity and improved technically.

Details

Arab Gulf Journal of Scientific Research, vol. 41 no. 2
Type: Research Article
ISSN: 1985-9899
