Search results

1 – 10 of 89
Open Access
Article
Publication date: 16 October 2018

Maximilian Schniedenharn, Frederik Wiedemann and Johannes Henrich Schleifenbaum


Abstract

Purpose

The purpose of this paper is to introduce an approach to measuring the shielding gas flow within laser powder bed fusion (L-PBF) machines under near-process conditions (regarding oxygen content and shielding gas flow).

Design/methodology/approach

The measurements are made sequentially using a hot-wire anemometer. After a short introduction to the measurement technique, the system that positions the measurement probe within the machine is described. Finally, the measured shielding gas flow of a commercial L-PBF machine is presented.

Findings

An approach to measuring the shielding gas flow within L-PBF machines has been developed and successfully tested. The use of a thermal anemometer together with an automated probe-placement system enables spatially resolved measurement of the flow speed and its turbulence.
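The spatially resolved flow-speed and turbulence measurements described here are typically reported as a mean velocity and a turbulence intensity per probe position. A minimal sketch, assuming already-calibrated velocity samples from the hot wire (the sample values are hypothetical):

```python
from statistics import mean, pstdev

def turbulence_intensity(samples):
    """Return mean flow speed and turbulence intensity Ti = u_rms / u_mean
    for a series of calibrated hot-wire velocity samples (m/s)."""
    u_mean = mean(samples)
    u_rms = pstdev(samples)  # RMS of the fluctuations about the mean
    return u_mean, u_rms / u_mean

# Hypothetical samples recorded at one probe position
u_mean, ti = turbulence_intensity([2.0, 2.2, 1.8, 2.1, 1.9])
```

Repeating this at each point of the automated placement grid yields the space-resolved flow and turbulence maps the abstract refers to.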

Research limitations/implications

The single-normal (SN) hot-wire anemometer used does not provide the orientation of the flow vectors. A probe with two or three hot-films and an improved placement system would yield more information about the flow while disturbing it less.

Originality/value

A measurement system which allows the measurement of the shielding gas flow within commercial L-PBF machines is presented. This enables the correlation of the shielding gas flow with the resulting parts’ quality.

Details

Rapid Prototyping Journal, vol. 24 no. 8
Type: Research Article
ISSN: 1355-2546


Open Access
Article
Publication date: 12 June 2019

Bidit Lal Dey, Sharifah Alwi, Fred Yamoah, Stephanie Agyepongmaa Agyepong, Hatice Kizgin and Meera Sarma


Abstract

Purpose

While it is essential to further research the growing diversity in western metropolitan cities, little is currently known about how the members of various ethnic communities acculturate to multicultural societies. The purpose of this paper is to explore immigrants’ cosmopolitanism and acculturation strategies through an analysis of the food consumption behaviour of ethnic consumers in multicultural London.

Design/methodology/approach

The study was set within the socio-cultural context of London. A number of qualitative methods such as in-depth interviews, observation and photographs were used to assess consumers’ acculturation strategies in a multicultural environment and how that is influenced by consumer cosmopolitanism.

Findings

Ethnic consumers’ food consumption behaviour reflects their acculturation strategies, which can be classified into four groups: rebellion, rarefaction, resonance and refrainment. This classification demonstrates ethnic consumers’ multi-directional acculturation strategies, which are also determined by their level of cosmopolitanism.

Research limitations/implications

The taxonomy presented in this paper advances current acculturation scholarship by suggesting a multi-directional model for acculturation strategies as opposed to the existing uni-directional and bi-directional perspectives and explicates the role of consumer cosmopolitanism in consumer acculturation. The paper did not engage host communities and there is hence a need for future research on how and to what extent host communities are acculturated to the multicultural environment.

Practical implications

The findings have direct implications for the choice of standardisation vs adaptation as a marketing strategy within multicultural cities. Whilst the rebellion group is more likely to respond to standardisation, increasing adaptation of goods and services can ideally target members of the resistance and resonance groups, and more fusion products should be earmarked exclusively for the resonance group.

Originality/value

The paper makes an original contribution by introducing a multi-directional perspective to acculturation, delineating a four-group taxonomy (rebellion, rarefaction, resonance and refrainment). It also presents a dynamic model that captures how consumer cosmopolitanism impinges upon the process and outcome of multi-directional acculturation strategies.

Details

International Marketing Review, vol. 36 no. 5
Type: Research Article
ISSN: 0265-1335


Open Access
Article
Publication date: 7 November 2023

Cristian Barra and Pasquale Marcello Falcone


Abstract

Purpose

The paper aims to address the following research questions: does institutional quality improve countries' environmental efficiency? And which pillars of institutional quality drive this improvement?

Design/methodology/approach

By specifying a directional distance function within a stochastic frontier framework, in which GHG emissions are treated as the undesirable output and GDP as the desirable one, the study computes environmental efficiency from an estimated production function for the European countries over three decades.
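In standard notation, a directional (output) distance function of this kind can be sketched as follows (generic symbols, not necessarily the paper's exact specification):

```latex
% Directional distance function with input vector x, desirable output y (GDP),
% undesirable output b (GHG emissions) and direction vector g = (g_y, g_b):
\vec{D}(x, y, b; g_y, g_b) = \max\{\beta \ge 0 : (y + \beta g_y,\; b - \beta g_b) \in P(x)\}
% \beta = 0 for a country on the frontier (fully environmentally efficient);
% larger \beta indicates greater inefficiency.
```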

Findings

In terms of country performance, the findings confirm that high- and upper-middle-income countries have higher environmental efficiency than low-middle-income countries. In this context, institutional quality proves particularly important in improving environmental efficiency for high-income countries.

Originality/value

This article attempts to analyze the role of different dimensions of institutional quality in different European countries' performance – in terms of mitigating GHGs (undesirable output) – while trying to raise their economic performance through their GDP (desirable output).

Highlights

  1. The paper aims at addressing the following research question: does institutional quality improve countries' environmental efficiency?

  2. We adopt a directional distance function in the context of stochastic frontier method, considering 40 European economies over a 30-year time interval.

  3. The findings confirm that high and upper middle-income countries have higher environmental efficiency compared to low middle-income countries.

  4. Institutional quality proves important in improving environmental efficiency for high-income countries, while performance decreases for the low-middle-income countries.


Details

Journal of Economic Studies, vol. 51 no. 9
Type: Research Article
ISSN: 0144-3585


Open Access
Article
Publication date: 6 October 2022

Peterson K. Ozili


Abstract

Purpose

This paper presents an overview of embedded finance. It identifies the applications, use case examples, benefits and challenges of embedded finance. The paper also analyzes global interest in embedded finance and compares it with interest in related finance concepts such as open finance, open banking, decentralized finance, financial innovation, Fintech and digital finance.

Design/methodology/approach

Granger causality test and two-stage least square regression were used to assess interest over time in embedded finance.

Findings

The empirical results show that interest in embedded finance increased significantly during the COVID-19 pandemic. The United States, the United Kingdom and India witnessed the highest interest in embedded finance compared to other countries. There is bi-directional Granger causality between interest in information about embedded finance and interest in information about financial innovation. There is uni-directional Granger causality between interest in information about embedded finance and interest in information about digital finance and open finance. The findings also reveal that interest in decentralized finance and open finance are significant determinants of interest in embedded finance. On the other hand, interest in embedded finance is a significant determinant of interest in digital finance, decentralized finance, Fintech and open banking. Also, interest in embedded finance is significantly correlated with interest in digital finance, decentralized finance, open banking and Fintech.
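A Granger causality test of the kind reported here compares a restricted autoregression of one series with an unrestricted model that adds lags of the other series. A minimal single-lag sketch in pure Python (the series are hypothetical; the paper's analysis uses search-interest data and its own lag choices):

```python
import math

def ols_rss(X, y):
    """Least-squares residual sum of squares: solve the normal equations
    (X'X) b = X'y by Gaussian elimination with partial pivoting."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                       # forward elimination
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        c[p], c[piv] = c[piv], c[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for q in range(p, k):
                A[r][q] -= f * A[p][q]
            c[r] -= f * c[p]
    b = [0.0] * k                            # back substitution
    for p in range(k - 1, -1, -1):
        b[p] = (c[p] - sum(A[p][q] * b[q] for q in range(p + 1, k))) / A[p][p]
    return sum((y[i] - sum(X[i][q] * b[q] for q in range(k))) ** 2
               for i in range(n))

def granger_f(x, y):
    """F-statistic for 'x Granger-causes y' with a single lag:
    compare y_t ~ y_{t-1} (restricted) vs y_t ~ y_{t-1} + x_{t-1}."""
    n = len(y) - 1
    Xr = [[1.0, y[t - 1]] for t in range(1, len(y))]
    Xu = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    yt = y[1:]
    rss_r, rss_u = ols_rss(Xr, yt), ols_rss(Xu, yt)
    return (rss_r - rss_u) / (rss_u / (n - 3))

# Hypothetical series in which y depends on lagged x
x = [math.sin(0.7 * i) for i in range(60)]
y = [0.0] * 60
for t in range(1, 60):
    y[t] = 0.8 * x[t - 1] + 0.05 * math.sin(3.1 * t)
f_stat = granger_f(x, y)   # large F => reject 'x does not Granger-cause y'
```

In practice such tests are run with a library implementation (e.g. a statsmodels-style `grangercausalitytests`) over several lags; the sketch above only shows the F-comparison at the heart of the method.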

Originality/value

At present there is little academic interest in embedded finance, even though embedded finance is part of the ongoing digital finance revolution. This paper fills this gap in the literature by assessing the benefits, use cases and challenges of embedded finance.

Details

Journal of Internet and Digital Economics, vol. 2 no. 2
Type: Research Article
ISSN: 2752-6356


Open Access
Article
Publication date: 2 December 2016

Juan Aparicio


Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for assessing the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function, and it presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may be easily used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations by relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and efficiency scores for each of the evaluated units. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).
Technical inefficiency is typically measured in DEA as the distance between the observed unit and a "benchmarking" target on the estimated piece-wise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets determined by the "furthest" efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the judged unit, and consequently some authors in the literature have suggested determining the closest targets instead.
The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors such as Aparicio et al. (2007) have shown, in an application on airlines, that substantial differences can be found between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier. The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models associated with minimizing some type of distance (e.g. Euclidean). In this respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the "weakly" efficient frontier is formally defined through mathematical distances.

Findings

All the interesting features of determining closest targets from a benchmarking point of view have recently generated increasing interest among researchers in calculating the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). In this paper, we therefore present a general classification of published contributions, mainly from a methodological perspective, and additionally indicate avenues for further research on this topic. The approaches we cite differ in how the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures. In turn, the aim is to globally minimize DEA model slacks to determine the closest efficient targets. However, as we show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.
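The contrast between "furthest" and closest targets can be illustrated geometrically: on a piece-wise linear frontier, the least-distance target is the Euclidean projection onto the nearest facet. A toy single-input, single-output sketch (the frontier vertices and the DMU are hypothetical; real DEA models solve this in many dimensions via mathematical programming):

```python
def closest_point_on_segment(p, a, b):
    """Orthogonal projection of point p onto segment ab, clamped to the ends."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def least_distance_target(p, vertices):
    """Closest point on a piece-wise linear frontier given by its vertices."""
    best, best_d2 = None, float("inf")
    for a, b in zip(vertices, vertices[1:]):
        q = closest_point_on_segment(p, a, b)
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best

# Hypothetical frontier in (input, output) space and an inefficient DMU
frontier = [(1.0, 1.0), (2.0, 3.0), (4.0, 4.0)]
dmu = (2.0, 1.5)
closest = least_distance_target(dmu, frontier)   # -> (1.4, 1.8)
```

For this DMU the output-oriented target at the same input level would be (2.0, 3.0), an output increase of 1.5, whereas the least-distance target (1.4, 1.8) asks for much smaller adjustments, which is exactly the argument made for closest targets.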

Originality/value

To the best of our knowledge, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599


Open Access
Article
Publication date: 15 March 2024

Anis Jarboui, Emna Mnif, Nahed Zghidi and Zied Akrout


Abstract

Purpose

In an era marked by heightened geopolitical uncertainties, such as international conflicts and economic instability, the dynamics of energy markets assume paramount importance. Our study delves into this complex backdrop, focusing on the intricate interplay between traditional and emerging energy sectors.

Design/methodology/approach

This study analyzes the interconnections among green financial assets, renewable energy markets, the geopolitical risk index and cryptocurrency carbon emissions from December 19, 2017 to February 15, 2023. We investigate these relationships using a novel time-frequency connectedness approach and machine learning methodology.

Findings

Our findings reveal that green energy stocks, except the PBW, exhibit the highest net transmission of volatility, followed by COAL. In contrast, CARBON emerges as the primary net recipient of volatility, followed by fuel energy assets. The frequency decomposition results also indicate that the long-term components serve as the primary source of directional volatility spillover, suggesting that volatility transmission among green stocks and energy assets tends to occur over a more extended period. The SHapley additive exPlanations (SHAP) results show that the green and fuel energy markets are negatively connected with geopolitical risks (GPRs). The results obtained through the SHAP analysis confirm the novel time-varying parameter vector autoregressive (TVP-VAR) frequency connectedness findings. The CARBON and PBW markets consistently experience spillover shocks from other markets in short and long-term horizons. The role of crude oil as a receiver or transmitter of shocks varies over time.

Originality/value

Green financial assets and clean energy play significant roles in the financial markets and reduce geopolitical risk. Our study employs a time-frequency connectedness approach to assess the interconnections among four markets' families: fuel, renewable energy, green stocks and carbon markets. We utilize the novel TVP-VAR approach, which allows for flexibility and enables us to measure net pairwise connectedness in both short and long-term horizons.

Details

Arab Gulf Journal of Scientific Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-9899


Open Access
Article
Publication date: 20 June 2022

Achraf Ghorbel, Sahar Loukil and Walid Bahloul


Abstract

Purpose

This paper analyzes the network connectedness among the major cryptocurrencies, the G7 stock indexes and the gold price over the coronavirus disease 2019 (COVID-19) pandemic period in 2020.

Design/methodology/approach

This study used the multivariate connectedness approach proposed by Diebold and Yilmaz (2009, 2012, 2014).

Findings

For a stock index portfolio, the static connectedness results showed greater independence between the stock markets during the COVID-19 crisis. It is worth noting that, in general, cryptocurrencies are diversifiers for a stock index portfolio, enabling volatility reduction, especially in the crisis period. The dynamic connectedness results do not differ significantly from the static ones; we note only that Bitcoin Gold becomes a net receiver. The scope of connectedness was maintained after the shock for most of the cryptocurrencies, except for Dash and Bitcoin Gold, which returned to their previous levels. In fact, Bitcoin was consistently the biggest net transmitter of volatility connectedness or spillovers during the crisis period, while Maker was the biggest net receiver of volatility from the global system. As for gold, it remained a net receiver, with a significant increase in network reception during the crisis period, which confirms its role as a safe haven.

Originality/value

Overall, the authors conclude that connectedness is conditional on the extent of the economic and financial uncertainty marked by the propagation of the coronavirus, while Bitcoin Gold and Litecoin are the smallest receivers, leading to the conclusion that they can serve as diversifiers.


Details

European Journal of Management and Business Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2444-8451


Open Access
Article
Publication date: 18 April 2023

Worapan Kusakunniran, Pairash Saiviroonporn, Thanongchai Siriapisith, Trongtum Tongdee, Amphai Uraiverotchanakorn, Suphawan Leesakul, Penpitcha Thongnarintr, Apichaya Kuama and Pakorn Yodprom


Abstract

Purpose

Cardiomegaly can be determined from the cardiothoracic ratio (CTR), which can be measured in a chest X-ray image. The ratio is calculated from the relationship between the size of the heart and the transverse dimension of the chest, and cardiomegaly is identified when the ratio exceeds a cut-off threshold. This paper aims to propose a solution for calculating this ratio to classify cardiomegaly in chest X-ray images.

Design/methodology/approach

The proposed method begins by constructing lung and heart segmentation models based on the U-Net architecture, using publicly available datasets with ground-truth heart and lung masks. The ratio is then calculated from the sizes of the segmented lung and heart areas. In addition, Progressive Growing of GANs (PGAN) is adopted to construct a new dataset containing chest X-ray images of three classes: male normal, female normal and cardiomegaly. This dataset is then used to evaluate the proposed solution, and the proposed solution is in turn used to evaluate the quality of the chest X-ray images generated by PGAN.

Findings

In the experiments, the trained models are applied to segment the heart and lung regions in chest X-ray images from the self-collected dataset. The calculated CTR values are compared with values measured manually by human experts; the average error is 3.08%. The models are then also applied to segment the heart and lung regions for CTR calculation on the dataset generated by PGAN, and cardiomegaly is determined using a range of cut-off threshold values. With the standard cut-off of 0.50, the proposed method achieves 94.61% accuracy, 88.31% sensitivity and 94.20% specificity.
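Once heart and lung masks are available, the CTR computation reduces to comparing horizontal extents. A minimal sketch on toy binary masks (the masks are hypothetical, and the lung-mask extent stands in for the transverse thoracic width; the paper works on full-resolution U-Net outputs):

```python
def max_width(mask):
    """Widest horizontal run of segmented pixels across all rows."""
    best = 0
    for row in mask:
        cols = [j for j, v in enumerate(row) if v]
        if cols:
            best = max(best, cols[-1] - cols[0] + 1)
    return best

def ctr(heart_mask, lung_mask):
    """Cardiothoracic ratio: max heart width / max transverse chest width,
    with the lung mask extent used as a proxy for the chest dimension."""
    return max_width(heart_mask) / max_width(lung_mask)

# Tiny hypothetical binary masks (1 = segmented pixel)
heart = [[0, 0, 1, 1, 1, 0, 0, 0],
         [0, 0, 1, 1, 1, 1, 0, 0]]
lungs = [[1, 1, 1, 0, 0, 1, 1, 1],
         [1, 1, 1, 1, 0, 1, 1, 1]]
r = ctr(heart, lungs)      # 4 / 8 = 0.5
cardiomegaly = r > 0.5     # standard cut-off; lower it to raise sensitivity
```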

Originality/value

The proposed solution is demonstrated to be robust across unseen datasets for segmentation, CTR calculation and cardiomegaly classification, including the dataset generated by PGAN. The cut-off value can be set below 0.50 to increase sensitivity; for example, a sensitivity of 97.04% can be achieved at a cut-off of 0.45, although the specificity then decreases from 94.20% to 79.78%.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 19 April 2022

Khurram Ejaz Chandia, Muhammad Badar Iqbal and Waseem Bahadur


Abstract

Purpose

This study aims to analyze the imbalances in the public finance structure of Pakistan’s economy and highlight the need for comprehensive reforms. Specifically, it aims to contribute to the empirical literature by analyzing the relationship between fiscal vulnerability, financial stress and macroeconomic policies in Pakistan’s economy between 1971 and 2020.

Design/methodology/approach

The study develops an index of fiscal vulnerability, an index of financial stress and an index of macroeconomic policies. The fiscal vulnerability index is based on patterns of fiscal indicators derived from past trends of the selected variables in Pakistan's economy. Financial stress in Pakistan is captured by a composite index based on variables with the potential to signal periods of stress stemming from the foreign exchange market, the securities market and monetary policy components. The macroeconomic policies index is developed to analyze the mechanism through which fiscal vulnerability and financial stress have influenced macroeconomic policies in Pakistan. The causal association between fiscal vulnerability, financial stress and macroeconomic policies is analyzed using the autoregressive distributed lag (ARDL) approach.
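Composite indices of this kind are commonly built by standardizing each indicator and averaging. A minimal equal-weight sketch (the indicator names and values are hypothetical; the paper's indices use its own variable sets and weighting):

```python
from statistics import mean, pstdev

def composite_index(series_by_indicator):
    """Equal-weight composite index: convert each indicator series to
    z-scores, then average across indicators for each period."""
    z = {}
    for name, values in series_by_indicator.items():
        m, s = mean(values), pstdev(values)
        z[name] = [(v - m) / s for v in values]
    n_periods = len(next(iter(z.values())))
    return [mean(z[name][t] for name in z) for t in range(n_periods)]

# Hypothetical fiscal indicators (e.g. deficit/GDP, debt/GDP) over 5 periods
index = composite_index({
    "deficit_gdp": [3.0, 4.0, 5.0, 6.0, 7.0],
    "debt_gdp":    [50.0, 55.0, 60.0, 70.0, 80.0],
})
```

By construction the index averages to zero over the sample, so rising values flag periods of above-average vulnerability or stress.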

Findings

There exists a long-run relationship among the three indices, and bi-directional causality between fiscal vulnerability and macroeconomic policies.

Originality/value

This study contributes to the development of a fiscal monitoring mechanism whose basic purpose is to analyze the refinancing risk of public liabilities, and it focuses on fiscal vulnerability from a macroeconomic perspective. It develops a framework to assess fiscal vulnerability in light of "The Risk Octagon" theory, which focuses on three risk components: fiscal variables, shocks associated with macroeconomic disruption and non-fiscal country-specific variables. The work's initial contribution to the literature is a framework (a fiscal vulnerability index, a financial stress index and a macroeconomic policies index) for effective, result-oriented macro-fiscal surveillance. Moreover, the empirical literature has advised developing countries to build their own capacity mechanisms for assessing fiscal vulnerability in light of the IMF guidelines on vulnerability assessments; this study thus attempts to fill that gap.

Details

Fulbright Review of Economics and Policy, vol. 2 no. 1
Type: Research Article
ISSN: 2635-0173


Open Access
Article
Publication date: 13 December 2022

Marcelo Colaço, Fabio Bozzoli, Luca Cattani and Luca Pagliarini


Abstract

Purpose

The purpose of this paper is to apply the conjugate gradient (CG) method, together with the adjoint operator (AO), to the pulsating heat pipe problem, including some quite interesting experimental results. The CG method with the AO was able to estimate the unknown functions more efficiently than the other techniques presented in this paper. The estimation of local, rather than global, heat transfer coefficients in pulsating heat pipes is a relatively new subject, and the main purpose of this paper is to present a robust, efficient and self-regularized inverse tool to estimate them, supported by experimental results. To increase the visibility and general usefulness of the paper to the heat transfer community, the authors include all numerical and experimental data used in this paper as supplemental material.

Design/methodology/approach

The approach is based on solving the inverse heat conduction problem in the wall, using the temperature measurements on the outer surface as input data. The procedure relies on the CG method with the AO. The proposed approach was first verified using synthetic data and then validated on real cases involving pulsating heat pipes.
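The CG iteration at the core of such inverse procedures can be sketched for a generic symmetric positive-definite system (the authors' adjoint-based function estimation is more involved; this shows only the iterative skeleton, whose early stopping is what makes CG-based inversion self-regularizing):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A
    (dense lists of lists), stopping when the squared residual < tol."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x (x = 0)
    p = r[:]                                  # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:                      # early stopping: in inverse
            break                             # problems this avoids fitting noise
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Hypothetical small SPD system standing in for the discretized problem
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)    # exact solution is (1/11, 7/11)
```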

Findings

An original, fast methodology to estimate the local convective heat flux is proposed. The procedure has been validated both numerically and experimentally, and compared with other classical methods, showing some distinctive benefits.

Practical implications

The approach is suitable for evaluating the performance of pulsating heat pipes because these devices exhibit a local heat flux distribution with substantial variation in both time and space, as a result of the complex flow patterns generated in this type of device.

Originality/value

The proposed procedure offers the following benefits: it affords a general model of the heat conduction problem that is easily customized to the particular case; it can be applied to large datasets; and it has a reduced computational cost.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 5
Type: Research Article
ISSN: 0961-5539

