Search results

1–10 of over 12,000
Article
Publication date: 30 November 2023

Moses Asori, Emmanuel Dogbey, Solomon Twum Ampofo and Julius Odei

Abstract

Purpose

Current evidence indicates that humans and animals are at increased risk of multiple health challenges due to the profusion of microplastics (MPs). However, mitigation is constrained by inadequate scientific data, further aggravated by the lack of evidence in many African countries. This review therefore synthesized evidence on the current extent of MP pollution in Africa and the analytical techniques used to report it.

Design/methodology/approach

A literature search was undertaken in research databases. Medical subject headings (MeSH) terms and keywords were used in the literature search. The authors found 38 studies from 10 countries that met the inclusion criteria.

Findings

Marine organisms had an MP prevalence ranging from 19% to 100%, whereas sediment and water samples ranged between 77% and 100%. The most common and dominant polymers were polypropylene and polyethylene.

Practical implications

This review shows that most studies still use methods that are prone to human error. Therefore, the concentration of MPs is likely underestimated, even though the authors' prevalence evaluations show that MPs are largely pervasive across multiple environmental matrices. The study also reveals significant spatial disparity in MP research across the African continent, highlighting the need for further research in other African countries.

Originality/value

Even though some reviews have assessed MP pollution in Africa, they have not evaluated sample prevalence, which is necessary to understand not only concentration but also pervasiveness across the continent. Secondly, this study delves deeper into the various methods of sampling, extraction and analysis of MPs, as well as their limitations and relevant recommendations.

Details

Management of Environmental Quality: An International Journal, vol. 35 no. 3
Type: Research Article
ISSN: 1477-7835

Article
Publication date: 30 October 2023

Qiangqiang Zhai, Zhao Liu, Zhouzhou Song and Ping Zhu

Abstract

Purpose

The Kriging surrogate model has demonstrated a powerful ability to address a variety of engineering challenges by emulating time-consuming simulations. However, for problems with high-dimensional input variables, it may be difficult to obtain a model with high accuracy and efficiency due to the curse of dimensionality. To meet this challenge, an improved high-dimensional Kriging modeling method based on the maximal information coefficient (MIC) is developed in this work.

Design/methodology/approach

The hyperparameter domain is first derived, and a dataset of hyperparameters and likelihood values is collected by Latin hypercube sampling. MIC values are then calculated from this dataset and used as prior knowledge for optimizing the hyperparameters. An auxiliary parameter is introduced to establish the relationship between the MIC values and the hyperparameters, and the hyperparameters are obtained by transforming the optimized auxiliary parameter. Finally, to further improve the modeling accuracy, a novel local optimization step is performed to discover more suitable hyperparameters.
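
A minimal sketch of this workflow, not the authors' implementation: the hyperparameter domain is sampled with a Latin hypercube, each hyperparameter dimension is scored against the Kriging likelihood with a dependence measure (mutual information stands in for MIC here), and the search is reduced to a single auxiliary parameter. The bounds, the score-to-hyperparameter mapping and the simple 1-D search are assumptions made for illustration.

```python
# Minimal sketch (illustrative, not the authors' implementation): MIC-style
# prior knowledge for high-dimensional Kriging hyperparameter tuning.
import numpy as np
from scipy.stats import qmc
from sklearn.feature_selection import mutual_info_regression   # stand-in for MIC
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def log_likelihood(X, y, theta):
    """Log-marginal likelihood of an anisotropic Kriging model with fixed theta."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=theta), optimizer=None)
    gp.fit(X, y)
    return gp.log_marginal_likelihood()

def mic_guided_hyperparameters(X, y, n_lhs=50, log_bounds=(-2.0, 2.0)):
    d = X.shape[1]
    # 1) Latin hypercube sample of the hyperparameter domain (log10 scale).
    sampler = qmc.LatinHypercube(d=d, seed=0)
    unit = sampler.random(n_lhs)
    thetas = 10.0 ** qmc.scale(unit, np.full(d, log_bounds[0]), np.full(d, log_bounds[1]))
    ll = np.array([log_likelihood(X, y, t) for t in thetas])
    # 2) Dependence score between each hyperparameter and the likelihood.
    score = mutual_info_regression(np.log10(thetas), ll) + 1e-12
    weights = score / score.sum()
    # 3) One auxiliary parameter c maps the scores back to a full vector:
    #    theta_i = 10 ** (c * w_i), so only c is optimized (a 1-D search),
    #    followed by whatever local refinement is desired.
    cs = np.linspace(-5.0, 5.0, 41)
    best_c = max(cs, key=lambda c: log_likelihood(X, y, 10.0 ** (c * weights)))
    return 10.0 ** (best_c * weights)

# Toy usage on a 20-dimensional function.
rng = np.random.default_rng(0)
X = rng.uniform(size=(80, 20))
y = np.sum(np.sin(3.0 * X), axis=1)
theta_opt = mic_guided_hyperparameters(X, y)
```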

Findings

The proposed method is then applied to five representative mathematical functions with dimensions ranging from 20 to 100 and an engineering case with 30 design variables.

Originality/value

The results show that the proposed high-dimensional Kriging modeling method can obtain more accurate results than the other three methods, and it has an acceptable modeling efficiency. Moreover, the proposed method is also suitable for high-dimensional problems with limited sample points.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 16 February 2024

Neeraj Joshi, Sudeep R. Bapat and Raghu Nandan Sengupta

Abstract

Purpose

The purpose of this paper is to develop optimal estimation procedures for the stress-strength reliability (SSR) parameter R = P(X > Y) of an inverse Pareto distribution (IPD).

Design/methodology/approach

We estimate the SSR parameter R = P(X > Y) of the IPD under minimum-risk and bounded-risk point estimation problems, where X and Y are the strength and stress variables, respectively. The total loss function considered is a combination of estimation error (squared error) and cost, and we minimize the associated risk in order to estimate the reliability parameter. As no fixed-sample technique can solve the proposed point estimation problems, we propose some "cost- and time-efficient" adaptive sampling techniques (two-stage and purely sequential sampling methods) to tackle them.
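
Before the sequential machinery, the target quantity itself is easy to illustrate. Below is a minimal Monte Carlo check of R = P(X > Y) for inverse Pareto stress and strength variables, not the authors' two-stage or purely sequential estimator; the parameterization (reciprocal of a unit-scale Pareto) is an assumption of this sketch.

```python
# Minimal sketch (illustrative only, not the authors' adaptive estimator):
# Monte Carlo check of the stress-strength reliability R = P(X > Y) when
# X (strength) and Y (stress) follow inverse Pareto distributions.
import numpy as np

rng = np.random.default_rng(42)

def rvs_inverse_pareto(alpha, size):
    # numpy's pareto() draws from a Lomax; adding 1 gives a classical
    # Pareto(alpha, scale=1) on [1, inf), whose reciprocal lies in (0, 1].
    return 1.0 / (rng.pareto(alpha, size=size) + 1.0)

def mc_reliability(alpha_x, alpha_y, n=200_000):
    x = rvs_inverse_pareto(alpha_x, n)          # strength
    y = rvs_inverse_pareto(alpha_y, n)          # stress
    r_hat = np.mean(x > y)
    se = np.sqrt(r_hat * (1.0 - r_hat) / n)     # Monte Carlo standard error
    return r_hat, se

# Under this parameterization R = alpha_x / (alpha_x + alpha_y) = 1/3,
# which the simulation should recover up to Monte Carlo error.
r_hat, se = mc_reliability(alpha_x=2.0, alpha_y=4.0)
print(f"R_hat = {r_hat:.4f} +/- {1.96 * se:.4f}")
```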

Findings

We state important results based on the proposed sampling methodologies, including estimates of the expected sample size, the standard deviation (SD) and the mean square error (MSE) of the terminal estimator of the reliability parameter. The theoretical values of the reliability parameter and the associated sample size and risk functions are well supported by exhaustive simulation analyses. The applicability of the suggested methodology is further corroborated using a real dataset based on insurance claims.

Originality/value

This study will be useful for scenarios where various logistical concerns are involved in the reliability analysis. The methodologies proposed in this study can reduce the number of sampling operations substantially and save time and cost to a great extent.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Content available
Article
Publication date: 23 October 2023

Adam Biggs and Joseph Hamilton

Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.
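
As a concrete illustration of this idea, the following is a minimal sketch, with hypothetical numbers rather than the authors' combat model, of how a Monte Carlo simulation can fold speed, accuracy and shot-to-shot variability into a single holistic drill outcome for two shooters.

```python
# Minimal sketch (hypothetical numbers, not the authors' model): Monte Carlo
# comparison of two shooters whose speed and accuracy trade off differently,
# with per-shot variability carried through to a holistic drill outcome.
import numpy as np

rng = np.random.default_rng(7)

def simulate_drill(mean_shot_time, sd_shot_time, hit_prob,
                   n_targets=10, n_runs=100_000):
    """Each run: n_targets shots, each with a noisy shot time and an
    independent hit/miss draw. Returns total drill time and hits per run."""
    times = np.clip(rng.normal(mean_shot_time, sd_shot_time,
                               (n_runs, n_targets)), 0.05, None)
    hits = rng.random((n_runs, n_targets)) < hit_prob
    return times.sum(axis=1), hits.sum(axis=1)

# Shooter A: faster but less accurate; Shooter B: slower but more accurate.
t_a, h_a = simulate_drill(mean_shot_time=0.70, sd_shot_time=0.10, hit_prob=0.80)
t_b, h_b = simulate_drill(mean_shot_time=0.85, sd_shot_time=0.08, hit_prob=0.90)

# One holistic outcome measure: probability of a clean drill (all targets hit)
# inside a 9-second par time. The criterion and par time are illustrative
# choices, not doctrine.
clean_a = np.mean((h_a == 10) & (t_a <= 9.0))
clean_b = np.mean((h_b == 10) & (t_b <= 9.0))
print(f"P(clean drill under 9 s): A = {clean_a:.3f}, B = {clean_b:.3f}")
```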

Findings

The review divides outcomes into theoretical overview and practical implication sections. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small arms combat modeling. This allows new human performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can present outcomes in terms of lethality. This method should convey the impact of a marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 2
Type: Research Article
ISSN: 2399-6439

Article
Publication date: 6 February 2024

Sanjay Dhingra and Abhishek

Abstract

Purpose

This study aims to explore and conceptualize metaverse adoption using a systematic literature review (SLR). It also aims to propose a conceptual model that identifies significant factors affecting metaverse adoption in the entertainment, education, tourism and health sectors.

Design/methodology/approach

An SLR was conducted using the "preferred reporting items for systematic reviews and meta-analyses" reporting protocol and the "theory, context, characteristics, methods" framework to include all relevant articles published up to March 2023, sourced from the Scopus and Web of Science databases.

Findings

The reviewed literature revealed that the countries with the most publications in the field of the metaverse were China and the USA. The technology acceptance model was the most used theoretical framework. Survey-based research using purposive and convenience sampling emerged as the predominant data collection method, and partial least squares structural equation modeling was the most used analytical technique. The review also identified the top six journals and the variables that inform the proposed model.

Originality/value

This review presents a novel contribution to the literature on metaverse adoption by forming a conceptual model that incorporates the most used variables in the entertainment, education, tourism and health sectors. Possible directions for future research, along with the identified research gaps, are also discussed.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 12 September 2023

Roberto Falcão, Eduardo Cruz, Murilo Costa Filho and Maria Elo

Abstract

Purpose

The purpose of this paper is to discuss the issues in studying hard-to-reach or dispersed populations, with a particular focus on the methodologies used to collect data on and investigate dispersed migrant entrepreneurs, illustrating the shortcomings, pitfalls and potential of accessing such hard-to-reach populations and disseminating research to them.

Design/methodology/approach

A mixed methodology is proposed for accessing hard-to-reach or dispersed populations, and this paper explores these issues using a sample of Brazilian migrants settled in different countries around the world.

Findings

This paper explores the empirical challenges, illustrating the shortcomings, pitfalls and potential of accessing hard-to-reach populations of migrant entrepreneurs and disseminating research to them. It provides insights by reporting research experiences developed over time by this group of researchers, reflecting a "mixing" of methods for accessing respondents, in contrast to a more rigid, a priori mixed-methods approach.

Originality/value

The main contribution of this paper is to showcase experiences with, and the suitability of, remote data collection, especially for projects that cannot accommodate the physical participation of researchers because of time or cost constraints. It reports on researching migrant entrepreneurship overseas. Remote digital tools and online data collection are highly relevant due to their time- and cost-efficiency, and they also represent solutions for researching dispersed populations. The approaches presented allow researchers to overcome several barriers to data collection and have characteristics that are instrumental for migrant research.

Details

International Journal of Sociology and Social Policy, vol. 44 no. 1/2
Type: Research Article
ISSN: 0144-333X

Article
Publication date: 1 April 2024

Tao Pang, Wenwen Xiao, Yilin Liu, Tao Wang, Jie Liu and Mingke Gao

Abstract

Purpose

This paper aims to study agent learning from expert demonstration data while incorporating reinforcement learning (RL), which enables the agent to break through the limitations of the expert demonstration data and reduces the dimensionality of the agent's exploration space, speeding up the convergence of training.

Design/methodology/approach

First, a decay weight function is set in the objective function of the agent's training to combine both types of methods, so that both RL and imitation learning (IL) guide the agent's behavior when the policy is updated. Second, this study designs a coupled utilization method for the demonstration trajectories and the training experience, so that samples from both sources can be combined during the agent's learning process, improving the data utilization rate and the agent's learning speed.
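
The following is a minimal sketch of this kind of coupling, not the authors' implementation: an imitation (behavior-cloning) loss on expert demonstrations and a policy-gradient loss on the agent's own experience are combined with a decaying weight, and every update mixes samples from both buffers. The network sizes, the exponential decay schedule and the placeholder batches are assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation): combining an
# imitation-learning loss on expert demonstrations with an RL loss on the
# agent's own experience, weighted by a decaying coefficient.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions = 8, 4
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def decay_weight(step, w0=1.0, rate=1e-3):
    # Weight on the imitation term decays as training proceeds, so the agent
    # gradually relies less on demonstrations and more on its own experience.
    return w0 * torch.exp(torch.tensor(-rate * step))

def update(step, demo_batch, exp_batch):
    demo_obs, demo_act = demo_batch                    # expert (state, action) pairs
    exp_obs, exp_act, exp_adv = exp_batch              # agent experience with advantages

    il_loss = F.cross_entropy(policy(demo_obs), demo_act)        # behavior cloning

    logp = F.log_softmax(policy(exp_obs), dim=-1)
    logp_act = logp.gather(1, exp_act.unsqueeze(1)).squeeze(1)
    rl_loss = -(logp_act * exp_adv).mean()                       # policy-gradient surrogate

    w = decay_weight(step)
    loss = w * il_loss + (1.0 - w) * rl_loss                     # coupled objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random placeholder batches (stand-ins for real buffers).
demo = (torch.randn(32, obs_dim), torch.randint(0, n_actions, (32,)))
exp = (torch.randn(32, obs_dim), torch.randint(0, n_actions, (32,)), torch.randn(32))
for step in range(3):
    update(step, demo, exp)
```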

Findings

The method is superior to other algorithms in terms of convergence speed and decision stability, avoiding training from scratch in terms of reward values and breaking through the restrictions imposed by the demonstration data.

Originality/value

The agent can adapt to dynamic scenes through exploration and trial-and-error mechanisms built on the experience of the demonstration trajectories. The demonstration dataset used in IL and the experience samples obtained during RL are coupled to improve data utilization efficiency and the generalization ability of the agent.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 15 August 2023

Chunping Zhou, Zheng Wei, Huajin Lei, Fangyun Ma and Wei Li

Abstract

Purpose

Surrogate models are extensively used to substitute for real models that are expensive to evaluate in time-dependent reliability analysis. Normally, different surrogate models have different scopes of application. However, information is often insufficient for analysts to select the most appropriate surrogate model for a specific application, so the result predicted by an individual surrogate model tends to be suboptimal or even inaccurate. An ensemble model can effectively deal with this concern. This work aims to study the application of ensemble models to the reliability analysis of time-dependent problems.

Design/methodology/approach

In this work, a method of reliability analysis for time-dependent problems based on ensemble learning of surrogate models is developed. The ensemble of surrogate models includes Kriging, radial basis function and support vector machine models, and the prediction is approximated by a weighted-average model. The ensemble is updated adaptively by finding and adding the sample points with large prediction errors throughout the procedure.
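
A minimal sketch of this kind of ensemble, not the authors' code: Kriging, RBF and SVM surrogates are combined by inverse-validation-error weights, and the design of experiments is enriched at the candidate point where the members disagree most. The toy "expensive" model, sample sizes and bounds are assumptions, and the reliability (failure-probability) evaluation itself is omitted.

```python
# Minimal sketch (illustrative, not the authors' implementation): a
# weighted-average ensemble of Kriging, RBF and SVR surrogates with
# adaptive enrichment at the point of largest member disagreement.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR

def expensive_model(x):                               # stand-in for the real model
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(15, 2))
y = expensive_model(X)
X_tr, y_tr = X[:10], y[:10]
X_val, y_val = X[10:], y[10:]

def fit_ensemble(X_tr, y_tr, X_val, y_val):
    gp = GaussianProcessRegressor().fit(X_tr, y_tr)   # Kriging
    rbf = RBFInterpolator(X_tr, y_tr)                 # radial basis function
    svr = SVR(C=10.0).fit(X_tr, y_tr)                 # support vector machine
    preds = [gp.predict(X_val), rbf(X_val), svr.predict(X_val)]
    mse = np.array([np.mean((p - y_val) ** 2) for p in preds]) + 1e-12
    w = (1.0 / mse) / np.sum(1.0 / mse)               # inverse-error weights

    def predict(Xq):
        p = np.column_stack([gp.predict(Xq), rbf(Xq), svr.predict(Xq)])
        return p @ w, p.std(axis=1)                   # ensemble mean, disagreement
    return predict

# Adaptive loop: add the candidate where the ensemble members disagree most.
for _ in range(5):
    predict = fit_ensemble(X_tr, y_tr, X_val, y_val)
    cand = rng.uniform(-1.0, 1.0, size=(200, 2))
    _, spread = predict(cand)
    x_new = cand[np.argmax(spread)][None, :]
    X_tr = np.vstack([X_tr, x_new])
    y_tr = np.append(y_tr, expensive_model(x_new))    # "expensive" evaluation
```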

Findings

The effectiveness of the proposed method is verified by several examples. The results show that the ensemble of surrogate models can effectively propagate the uncertainty of time-varying problems, and evaluate the reliability with high prediction accuracy and computational efficiency.

Originality/value

This work proposes an adaptive learning framework for the uncertainty propagation of time-dependent problems based on an ensemble of surrogate models. Compared with individual surrogate models, the ensemble model not only saves the effort of selecting an appropriate surrogate model, especially when knowledge of the problem is lacking, but also improves prediction accuracy and computational efficiency.

Details

Multidiscipline Modeling in Materials and Structures, vol. 19 no. 6
Type: Research Article
ISSN: 1573-6105

Article
Publication date: 28 December 2023

Weixin Zhang, Zhao Liu, Yu Song, Yixuan Lu and Zhenping Feng

Abstract

Purpose

To improve the speed and accuracy of the turbine blade film cooling design process, state-of-the-art deep learning models were introduced in this study to investigate the most suitable model for the prediction task. This paper aims to create a generative surrogate model that can be applied to multi-objective optimization problems.

Design/methodology/approach

The latest backbone in the field of computer vision (Swin-Transformer, 2021) was introduced and improved as the surrogate model for predicting the multi-physics field distribution (film cooling effectiveness, pressure, density and velocity). The basic samples were generated by the Latin hypercube sampling method, and the numerical method adopted for the calculations was first validated experimentally. The training and testing samples were calculated at experimental conditions. Finally, the results predicted by the surrogate model were verified by experiments in a linear cascade.
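
The paper's surrogate is a Swin-Transformer U-Net; as a much-simplified stand-in, the sketch below shows the general idea of a parameter-to-field surrogate: a small convolutional decoder maps a vector of design/flow parameters to several 2-D field channels (effectiveness, pressure, density, velocity). All shapes, layer sizes and the training data here are illustrative placeholders, not the paper's architecture.

```python
# Minimal stand-in sketch (not the Swin-Transformer U-Net from the paper):
# a small decoder mapping a parameter vector to multi-channel 2-D fields.
import torch
import torch.nn as nn

class FieldSurrogate(nn.Module):
    def __init__(self, n_params=8, n_fields=4, base=32):
        super().__init__()
        self.base = base
        self.fc = nn.Linear(n_params, base * 8 * 8)              # latent 8x8 map
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1),      # 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1),      # 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(base, n_fields, 4, stride=2, padding=1),  # 64x64
        )

    def forward(self, params):
        z = self.fc(params).view(-1, self.base, 8, 8)
        return self.decoder(z)                                   # (N, n_fields, 64, 64)

# Training-loop skeleton on CFD-generated samples (random placeholders here).
model = FieldSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
params = torch.randn(16, 8)            # design/flow parameters (stand-in for LHS design)
fields = torch.randn(16, 4, 64, 64)    # CFD multi-physics fields (stand-in)
for epoch in range(3):
    pred = model(params)
    loss = nn.functional.mse_loss(pred, fields)
    opt.zero_grad()
    loss.backward()
    opt.step()
```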

Findings

The results indicated that, compared with the multi-scale Pix2Pix model, the Swin-Transformer U-Net model achieved higher accuracy and computing speed in predicting contour results. The computation time for each step of the Swin-Transformer U-Net model is one-third that of the original model, especially in the case of multi-physics field prediction. The correlation index reached more than 99.2%, and the first-order error was lower than 0.3% for the multi-physics fields. The predictions of the data-driven surrogate model are consistent with the computational fluid dynamics results, and both are very close to the experimental results. Applying the Swin-Transformer model to enlarged sets of samples with different structures will reduce the cost of numerical calculations as well as experiments.

Research limitations/implications

The number of U-Net layers and the sample scale have a proper relationship according to equation (8). Too many U-Net layers lead to unnecessary nonlinear variation, whereas too few layers lead to insufficient feature extraction; in the Swin-Transformer U-Net model, an incorrect number of U-Net layers will therefore reduce prediction accuracy. The multi-scale Pix2Pix model has higher accuracy in predicting a single physical field, but its calculation speed is too slow. The Swin-Transformer model is fast in prediction and training (nearly three times faster than the multi-scale Pix2Pix model), but its predicted contours contain more noise. The neural network predictions and the numerical calculations are consistent with the experimental distributions.

Originality/value

This paper creates a generative surrogate model that can be applied to multi-objective optimization problems. A generative adversarial network with a new backbone is chosen to extend the output from a single contour to multi-physics fields, which generates more results simultaneously than traditional surrogate models, reduces the time cost and is more applicable to multi-objective spatial optimization algorithms. The Swin-Transformer surrogate model is three times faster in computation than the multi-scale Pix2Pix model, and its predictions of the multi-physics fields are more accurate.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 8 September 2023

Maryam Nasser Al-Nuaimi

Abstract

Purpose

Despite the ever-increasing importance of cultivating information, communication and technology (ICT) literacy skills among college students, these skills have yet to be captured by comprehensive measuring instruments. A glance at the empirical literature reveals that most pertinent scales have been confined to measuring Internet literacy skills, whereas educators in the 21st century advocate an inclusive conceptualization of ICT literacy, one that embodies technical, critical, cognitive and emotional competencies. Additionally, gaps remain in testing the measurement invariance of ICT literacy scales across genders or cultures, and more empirical evidence is needed. To that end, the current study aims to adapt and cross-validate an ICT literacy self-efficacy scale across gender by testing measurement invariance using multiple-sample confirmatory factor analysis (MCFA). Furthermore, the study aims to verify the scale's psychometric properties to establish its construct validity and understand its underlying factorial structure.

Design/methodology/approach

The current study administered the scale to a cross-sectional sample of 3,560 undergraduate students enrolled in six universities in the Sultanate of Oman.

Findings

The results revealed that the ICT literacy self-efficacy scale exhibits satisfactory indices of construct validity. The results of the MCFA demonstrate that the differences in goodness-of-fit indices between the nested models and the baseline model were below the cut-off criterion of 0.01, indicating invariance. Therefore, the scale has proved amenable to comparing genders on ICT literacy self-efficacy using a one-way multivariate analysis of variance.
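
A tiny illustration of the reported decision rule follows, with hypothetical fit indices rather than the study's results; the multi-group models themselves would be fitted in dedicated SEM software.

```python
# Tiny illustration (hypothetical fit indices, not the study's data) of the
# decision rule reported above: invariance across groups is supported when
# the drop in a goodness-of-fit index (here CFI) between successively
# constrained nested models stays below the 0.01 cut-off.
fits = {
    "configural": 0.951,   # no equality constraints across groups
    "metric":     0.947,   # factor loadings constrained equal
    "scalar":     0.943,   # loadings and intercepts constrained equal
}

steps = list(fits.items())
for (prev_name, prev_cfi), (name, cfi) in zip(steps, steps[1:]):
    delta = prev_cfi - cfi
    verdict = "invariance supported" if delta < 0.01 else "invariance rejected"
    print(f"{prev_name} -> {name}: delta CFI = {delta:.3f} ({verdict})")
```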

Originality/value

The study has several implications for research and pedagogical practice. It provides empirical evidence for establishing ICT literacy self-efficacy as a distinct high-domain construct of task-specific self-efficacy beliefs.

1–10 of over 12,000