Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 22 March 2021

Eunyoung Cho

Abstract

In this paper, we show that there is a negative premium for MAX stocks in the Korean stock market. However, there is no evidence that the MAX effect overwhelms the effects of idiosyncratic risk. When we control for idiosyncratic risk, the negative relationship between extreme returns and future returns is less robust. Rather, the cross-effect of the extreme returns and the idiosyncratic risk factors explains the negative premium. Furthermore, our results are not fully explained by exposure to market timing and the economic state. Overall, the extreme return and idiosyncratic risk effects appear to coexist in the Korean stock market, but they are not independent.
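
A rough sense of how such a test is carried out: the sketch below performs a bivariate portfolio sort, first on idiosyncratic volatility and then on MAX within each volatility group, and reports the high-minus-low MAX return spread. The DataFrame layout and column names (month, max_ret, ivol, fwd_ret) are illustrative assumptions, not the paper's data or exact procedure.

```python
# Hedged sketch: a bivariate (IVOL x MAX) portfolio sort of the kind commonly
# used to test whether the MAX effect survives controls for idiosyncratic risk.
# The panel layout (columns 'month', 'max_ret', 'ivol', 'fwd_ret') is
# hypothetical, not taken from the paper.
import pandas as pd

def double_sort(panel: pd.DataFrame, n_groups: int = 5) -> pd.DataFrame:
    """Sort stocks into IVOL quintiles first, then MAX quintiles within each,
    and average next-month returns in each (IVOL, MAX) cell."""
    df = panel.copy()
    # Quintile of idiosyncratic volatility within each month (control variable).
    df["ivol_q"] = df.groupby("month")["ivol"].transform(
        lambda s: pd.qcut(s, n_groups, labels=False, duplicates="drop"))
    # Quintile of MAX (max daily return of the prior month) within each IVOL group.
    df["max_q"] = df.groupby(["month", "ivol_q"])["max_ret"].transform(
        lambda s: pd.qcut(s, n_groups, labels=False, duplicates="drop"))
    # Equal-weighted next-month return of each cell, averaged over months.
    cells = (df.groupby(["month", "ivol_q", "max_q"])["fwd_ret"].mean()
               .groupby(["ivol_q", "max_q"]).mean().unstack("max_q"))
    # The high-minus-low MAX spread within each IVOL quintile is the quantity
    # whose sign and significance the MAX-effect test is about.
    cells["high_minus_low"] = cells[n_groups - 1] - cells[0]
    return cells

# Usage: double_sort(panel_df), where panel_df has one row per stock-month.
```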

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 29 no. 1
Type: Research Article
ISSN: 1229-988X

Keywords

Open Access
Article
Publication date: 27 November 2023

J.I. Ramos and Carmen María García López

Abstract

Purpose

The purpose of this paper is to analyze numerically the blowup in finite time of the solutions to a one-dimensional, bidirectional, nonlinear wave model equation for the propagation of small-amplitude waves in shallow water, as a function of the relaxation time, linear and nonlinear drift, power of the nonlinear advection flux, viscosity coefficient, viscous attenuation, and amplitude, smoothness and width of three types of initial conditions.

Design/methodology/approach

An implicit, first-order accurate in time, finite difference method valid for semipositive relaxation times has been used to solve the equation in a truncated domain for three different initial conditions, a first-order time derivative initially equal to zero and several constant wave speeds.
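
The abstract does not reproduce the model equation, so the sketch below only illustrates the flavour of an implicit, first-order-in-time finite difference scheme on a truncated domain, applied to a generic one-dimensional nonlinear advection-diffusion equation; the stand-in equation, parameters and Gaussian initial condition are assumptions, not the authors' scheme.

```python
# Hedged sketch: backward-Euler (implicit, first-order in time) finite differences
# for a generic 1D nonlinear advection-diffusion equation
#     u_t + (u**p)_x = nu * u_xx
# on a truncated domain with homogeneous Dirichlet boundaries.  This is a
# stand-in for the bidirectional model equation of the paper, which is not
# reproduced in the abstract; p, nu, dt and the unit-amplitude Gaussian initial
# condition are illustrative choices.
import numpy as np

def step_backward_euler(u, dt, dx, p, nu, iters=50, tol=1e-10):
    """One implicit time step solved by Picard (fixed-point) iteration."""
    v = u.copy()
    for _ in range(iters):
        flux = v**p
        dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)  # centered (u^p)_x
        lap = (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx**2      # centered u_xx
        v_new = u + dt * (nu * lap - dflux)
        v_new[0] = v_new[-1] = 0.0                                    # truncated-domain BCs
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Evolve a Gaussian initial condition until a crude growth threshold or the
# final time is reached (growth monitoring is how a blowup study would proceed).
x = np.linspace(-50.0, 50.0, 2001)
dx = x[1] - x[0]
u = np.exp(-x**2)                       # unit-amplitude Gaussian
dt, t, t_end, p, nu = 1e-3, 0.0, 5.0, 2, 1e-4
while t < t_end and np.max(np.abs(u)) < 1e3:
    u = step_backward_euler(u, dt, dx, p, nu)
    t += dt
print(f"stopped at t = {t:.3f}, max|u| = {np.max(np.abs(u)):.3e}")
```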

Findings

The numerical experiments show a very rapid transient from the initial conditions to the formation of a leading propagating wave, whose duration depends strongly on the shape, amplitude and width of the initial data as well as on the coefficients of the bidirectional equation. The blowup times for the triangular conditions have been found to be larger than those for the Gaussian ones, and the latter are larger than those for rectangular conditions, thus indicating that the blowup time decreases as the smoothness of the initial conditions decreases. The blowup time has also been found to decrease as the relaxation time, degree of nonlinearity, linear drift coefficient and amplitude of the initial conditions are increased, and as the width of the initial condition is decreased, but it increases as the viscosity coefficient is increased. No blowup has been observed for relaxation times smaller than one-hundredth, viscosity coefficients larger than ten-thousandths, quadratic and cubic nonlinearities, and initial Gaussian, triangular and rectangular conditions of unity amplitude.

Originality/value

The blowup of a one-dimensional, bidirectional equation that is a model for the propagation of waves in shallow water, longitudinal displacement in homogeneous viscoelastic bars, nerve conduction, nonlinear acoustics and heat transfer in very small devices and/or at very high transfer rates has been determined numerically as a function of the linear and nonlinear drift coefficients, power of the nonlinear drift, viscosity coefficient, viscous attenuation, and amplitude, smoothness and width of the initial conditions for nonzero relaxation times.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Open Access
Article
Publication date: 1 September 2020

Kevin Alvarez and Vladik Kreinovich

Abstract

Purpose

The current pandemic is difficult to model – and thus difficult to control. In contrast to the previous epidemics, whose dynamics were smooth and well described by the existing models, the statistics of the current pandemic are highly oscillating. The purpose of this paper is to explain these oscillations and to see how this explanation can be used to fight the epidemic.

Design/methodology/approach

The authors use an analogy with economic systems.

Findings

The authors show that these oscillations can be explained if we take into account the disease's long incubation period, as a result of which our control measures are determined by outdated data showing the number of infected people two weeks ago. To better control the pandemic, the authors propose to use the experience of economics, where the effect of different measures can likewise be observed only after some time. In the past, this led to wild oscillations of the economy, with rapid growth periods followed by devastating crises. In time, economists learned how to smooth these cycles and thus drastically decrease the corresponding negative effects. The authors hope that this experience can help fight the pandemic.
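
To make the delay mechanism concrete, the toy simulation below (not the authors' model) applies the same proportional control rule once to current infection counts and once to two-week-old counts; the recursion and all parameters are illustrative assumptions.

```python
# Hedged toy model (not the authors' model): a discrete SIR-type recursion where
# the contact-reduction policy at day t reacts to the infection count observed
# `delay_days` earlier.  With a 14-day lag the same feedback rule can over- and
# under-shoot, which is the delay-driven oscillation mechanism described above.
import numpy as np

def simulate(delay_days, days=400, beta0=0.3, gamma=0.1, target=0.01):
    S, I = np.empty(days), np.empty(days)
    S[0], I[0] = 0.99, 0.01
    for t in range(days - 1):
        # The policy sees the infection level from `delay_days` ago (outdated data).
        observed = I[max(t - delay_days, 0)]
        # Simple proportional response: tighten measures when the *observed*
        # prevalence exceeds the target, relax them when it is below.
        control = np.clip(observed / target, 0.2, 5.0)
        beta = beta0 / control
        new_inf = beta * S[t] * I[t]
        S[t + 1] = S[t] - new_inf
        I[t + 1] = I[t] + new_inf - gamma * I[t]
    return I

smooth = simulate(delay_days=0)
lagged = simulate(delay_days=14)
print("peak-to-trough ratio, no delay  :", smooth[100:].max() / smooth[100:].min())
print("peak-to-trough ratio, 14-day lag:", lagged[100:].max() / lagged[100:].min())
```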

Originality/value

To the best of our knowledge, this is the first explanation of the highly oscillatory nature of this epidemic’s dynamics.

Details

Asian Journal of Economics and Banking, vol. 4 no. 3
Type: Research Article
ISSN: 2615-9821

Keywords

Open Access
Article
Publication date: 18 January 2022

Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua

Abstract

Purpose

The research approach is based on the concept that a failure event is rarely random and is often generated by a chain of previous events connected by a sort of domino effect. Thus, the purpose of this study is the optimal selection of the components to predictively maintain on the basis of their failure probability, under budget and time constraints.

Design/methodology/approach

Asset maintenance is a major challenge for any process industry. Thanks to the development of Big Data Analytics techniques and tools, the data produced by such systems can be analyzed in order to predict their behavior. Considering the asset as a social system composed of several interacting components, this work develops a framework to identify the relationships between component failures and to avoid them through the predictive replacement of critical components: such relationships are identified through Association Rule Mining (ARM), while their interaction is studied through Social Network Analysis (SNA).
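
A minimal sketch of the ARM-plus-SNA combination, assuming a hypothetical one-hot failure log and illustrative support/confidence thresholds; it uses mlxtend for the rule mining and networkx for the network analysis, which is only one possible toolchain.

```python
# Hedged sketch of the ARM + SNA combination described above: mine association
# rules between component failures, then treat the rules as a directed graph and
# rank components by centrality.  The one-hot failure log and the support /
# confidence thresholds are illustrative assumptions, not the paper's data.
import pandas as pd
import networkx as nx
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is a maintenance window; True means the component failed in it.
failures = pd.DataFrame({
    "pump":   [True, True, False, True, False, True],
    "valve":  [True, True, False, True, False, False],
    "motor":  [False, True, True, True, False, True],
    "sensor": [False, False, True, False, True, True],
})

# Step 1 - Association Rule Mining: which failures tend to occur together?
itemsets = apriori(failures, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)

# Step 2 - Social Network Analysis: rules become edges of a directed graph.
g = nx.DiGraph()
for _, r in rules.iterrows():
    for a in r["antecedents"]:
        for c in r["consequents"]:
            g.add_edge(a, c, weight=r["confidence"])

# Components that many failure chains pass through are candidates for
# predictive replacement under the budget/time constraints of the model.
ranking = sorted(nx.betweenness_centrality(g).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```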

Findings

A case example of a process industry is presented to explain and test the proposed model and to discuss its applicability. The proposed framework provides an approach to expand upon previous work in the areas of prediction of fault events and monitoring strategy of critical components.

Originality/value

The novel combined adoption of ARM and SNA is proposed to identify the hidden interaction among events and to define the nature of such interactions and communities of nodes in order to analyze local and global paths and define the most influential entities.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 3
Type: Research Article
ISSN: 0265-671X

Keywords

Open Access
Article
Publication date: 29 July 2020

Walaa M. El-Sayed, Hazem M. El-Bakry and Salah M. El-Sayed

Abstract

Wireless sensor networks (WSNs) periodically collect data through randomly dispersed sensors (motes), which typically consume considerable energy in radio communication, mainly for data transmission within the network. Furthermore, the dissemination mode in a WSN usually produces noisy values, incorrect measurements or missing information that affect the behaviour of the network. In this article, a Distributed Data Predictive Model (DDPM) was proposed to extend the network lifetime by decreasing the energy consumption of sensor nodes. It was built upon a distributive clustering model for predicting dissemination faults in WSNs. The proposed model was developed using a recursive least squares (RLS) adaptive filter integrated with a finite impulse response (FIR) filter to remove unwanted reflections and noise accompanying the transferred signals among the sensors, aiming to minimize the size of the transferred data and thereby provide energy efficiency. The experimental results demonstrated that DDPM reduced the rate of data transmission to ∼20%. It also decreased the energy consumption to 95% throughout the dataset sample and improved the performance of the sensory network by about 19.5%, thus prolonging the lifetime of the network.
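
The sketch below illustrates the generic dual-prediction idea behind such schemes: an FIR predictor whose taps are updated by RLS, with a transmission only when the prediction error exceeds a tolerance. It is a hedged illustration, not the DDPM model itself; the filter order, forgetting factor, tolerance and synthetic signal are assumptions.

```python
# Hedged sketch of RLS-adapted FIR prediction for data reduction in a sensor
# node (a generic dual-prediction sketch, not the DDPM model).  The node
# transmits a reading only when the FIR predictor, whose taps are updated by
# recursive least squares (RLS), misses by more than a tolerance - every
# skipped transmission saves radio energy.
import numpy as np

def rls_fir_reduction(signal, order=4, lam=0.98, delta=100.0, tol=0.05):
    w = np.zeros(order)                 # FIR predictor taps
    P = np.eye(order) * delta           # inverse correlation matrix estimate
    window = np.zeros(order)            # last `order` samples
    transmissions = 0
    for sample in signal:
        pred = w @ window               # FIR prediction from past samples
        err = sample - pred
        if abs(err) > tol:              # prediction too far off -> transmit
            transmissions += 1
        # RLS update of the taps with forgetting factor lam.
        # (A real deployment would drive the sink's copy with the prediction
        #  when nothing is sent; this sketch updates with the true sample.)
        k = P @ window / (lam + window @ P @ window)
        w = w + k * err
        P = (P - np.outer(k, window @ P)) / lam
        window = np.roll(window, 1)
        window[0] = sample
    return transmissions / len(signal)  # fraction of samples actually sent

# Noisy, slowly varying reading: most samples are predictable and never sent.
t = np.linspace(0, 20, 2000)
readings = np.sin(0.5 * t) + 0.01 * np.random.randn(t.size)
print("transmission rate:", rls_fir_reduction(readings))
```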

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 22 May 2019

Adrian Magdaş

Abstract

The purpose of this paper is to study the coupled fixed point problem and the coupled best proximity problem for single-valued and multi-valued contraction type operators defined on cyclic representations of the space. The approach is based on fixed point results for appropriate operators generated by the initial problems.
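
For reference, the standard definitions behind these two problems can be stated compactly; the paper's cyclic, multi-valued setting may refine them.

```latex
% Standard definitions (the paper's cyclic, multi-valued setting may refine these).
% A coupled fixed point of an operator $T : X \times X \to X$ is a pair $(x^*, y^*)$ with
\[
  x^* = T(x^*, y^*), \qquad y^* = T(y^*, x^*).
\]
% A coupled best proximity point of $T : A \times A \to B$ is a pair $(x^*, y^*) \in A \times A$ with
\[
  d\bigl(x^*, T(x^*, y^*)\bigr) = d\bigl(y^*, T(y^*, x^*)\bigr) = d(A, B),
\]
% where $d(A, B) = \inf\{ d(a, b) : a \in A,\ b \in B \}$.
```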

Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for the assessment of the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function and presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may be easily used with multiple inputs and outputs; a single score of efficiency is obtained for each assessed organization; the technique ranks organizations on relative efficiency; and, finally, it yields benchmarking information. DEA models provide both benchmarking information and efficiency scores for each of the evaluated units when applied to a dataset of observations and variables (inputs and outputs). Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piecewise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets that are determined by the “furthest” efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the judged unit, and consequently, some authors in the literature have suggested determining the closest targets instead. The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application on airlines, that substantial differences can be found between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier.

The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models associated with minimizing some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the “weakly” efficient frontier is formally defined through mathematical distances.
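
For concreteness, the sketch below solves the standard input-oriented CCR envelopment LP, the kind of model whose “furthest” projections motivate the least-distance literature surveyed here; least-distance and closest-target models need different, nonconvex formulations and are not shown. The small input/output dataset is hypothetical.

```python
# Hedged sketch: the standard input-oriented CCR envelopment LP.  Closest-target
# models minimise a distance to the complement of a polyhedral set and need
# different machinery; they are not attempted here.  The dataset is hypothetical.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 6.0],    # inputs:  rows = inputs, cols = DMUs
              [3.0, 2.0, 5.0, 1.0]])
Y = np.array([[1.0, 2.0, 3.0, 2.0]])   # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]                     # theta* = radial efficiency score

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```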

Findings

All the interesting features of the determination of closest targets from a benchmarking point of view have generated, in recent times, the increasing interest of researchers in the calculation of the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). So, in this paper, we present a general classification of published contributions, mainly from a methodological perspective, and additionally, we indicate avenues for further research on this topic. The approaches that we cite in this paper differ in the way that the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures. In turn, the aim is to globally minimize DEA model slacks to determine the closest efficient targets. However, as we will show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity will justify the existence of different alternatives for solving these types of models.

Originality/value

As far as we are aware, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Keywords

Open Access
Article
Publication date: 19 April 2022

Liwei Ju, Zhe Yin, Qingqing Zhou, Li Liu, Yushu Pan and Zhongfu Tan

Abstract

Purpose

This study aims to form a new concept of power-to-gas-based virtual power plant (GVPP) and propose a low-carbon economic scheduling optimization model for GVPP considering carbon emission trading.

Design/methodology/approach

In view of the strong uncertainty of wind power and photovoltaic power generation in the GVPP, information gap decision theory (IGDT) is used to measure the uncertainty tolerance threshold under different expected target deviations of the decision-makers. To verify the feasibility and effectiveness of the proposed model, a nine-node energy hub was selected as the simulation system.
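
A toy, single-period illustration of the IGDT robustness measure (not the paper's GVPP formulation): find the largest uncertainty radius around the wind forecast for which the worst-case dispatch cost stays within a tolerated deviation of the baseline cost. All numbers and the linear cost model are assumptions.

```python
# Hedged toy illustration of the IGDT robustness measure mentioned above (not
# the paper's GVPP model): for a single period, any wind shortfall below the
# forecast must be covered by gas-turbine output.  IGDT asks for the largest
# uncertainty radius alpha around the forecast such that the worst-case cost
# still stays within a tolerated deviation of the baseline cost.
load = 100.0          # MW demand (illustrative)
wind_forecast = 40.0  # MW forecast wind output (illustrative)
c_gas = 50.0          # $/MWh gas-turbine energy (illustrative)
sigma = 0.15          # tolerated relative cost deviation (illustrative)

def cost(wind_actual: float) -> float:
    """Dispatch cost when the gas turbine covers whatever wind does not."""
    return c_gas * max(load - wind_actual, 0.0)

baseline = cost(wind_forecast)
budget = (1.0 + sigma) * baseline

# Robustness radius: the worst case inside |wind - forecast| <= alpha * forecast
# is wind = (1 - alpha) * forecast, so grow alpha until it exhausts the budget.
alpha, step = 0.0, 1e-4
while alpha + step <= 1.0 and cost((1.0 - (alpha + step)) * wind_forecast) <= budget:
    alpha += step
print(f"baseline cost = {baseline:.0f} $, IGDT robustness radius alpha = {alpha:.3f}")
```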

Findings

The GVPP can coordinate and optimize the output of power-to-gas (P2G) and gas turbines according to the difference between gas and electricity prices in the electricity market and the natural gas market at different times. The IGDT method can be used to describe the impact of wind and solar uncertainty in the GVPP. Carbon emission rights trading can increase the operating space of P2G and reduce the operating cost of the GVPP.

Research limitations/implications

This study considers the electrical conversion and spatio-temporal calming characteristics of P2G, integrates it with the VPP into a GVPP, uses the IGDT method to describe the impact of wind and solar uncertainty and then proposes a near-zero-carbon stochastic scheduling optimization model for the GVPP based on IGDT.

Originality/value

This study designed a novel GVPP structure integrating P2G and a gas storage device into the VPP and proposed a basic near-zero-carbon scheduling optimization model for the GVPP with the goal of minimizing operating costs. Finally, the study constructed a stochastic scheduling optimization model for the GVPP.

Details

International Journal of Climate Change Strategies and Management, vol. 15 no. 2
Type: Research Article
ISSN: 1756-8692

Keywords

Open Access
Article
Publication date: 29 December 2021

M'Hamed El-Louh, Mohammed El Allali and Fatima Ezzaki

Abstract

Purpose

In this work, the authors are interested in the notion of vector-valued and set-valued Pettis integrable pramarts. The notion of pramart is more general than that of martingale. Every martingale is a pramart, but the converse is not generally true.
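
For orientation, the usual real/vector-valued definitions can be recalled; this is a hedged reminder, since the paper works in the more general Pettis integrable, set-valued setting.

```latex
% Usual definitions; the paper works in the Pettis integrable, set-valued setting.
% Martingale: for an adapted, integrable sequence $(X_n)_{n \ge 1}$,
\[
  \mathbb{E}\bigl[X_{n+1} \mid \mathcal{F}_n\bigr] = X_n \quad \text{for all } n .
\]
% Pramart: for bounded stopping times $\sigma \le \tau$, the differences
\[
  \mathbb{E}\bigl[X_\tau \mid \mathcal{F}_\sigma\bigr] - X_\sigma
\]
% converge to $0$ in probability along the directed set of bounded stopping
% times; a martingale satisfies this with the differences identically $0$,
% which is why every martingale is a pramart.
```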

Design/methodology/approach

In this work, the authors present several properties and convergence theorems for Pettis integrable pramarts with convex weakly compact values in a separable Banach space.

Findings

The existence of the conditional expectation of Pettis integrable multifunctions indexed by bounded stopping times is provided. The authors prove the almost sure convergence, in the Mosco and linear topologies, of Pettis integrable pramarts with values in cwk(E), the family of convex weakly compact subsets of a separable Banach space.

Originality/value

The purpose of the present paper is to present new properties and various new convergence results for convex weakly compact valued Pettis integrable pramarts in Banach space.

Details

Arab Journal of Mathematical Sciences, vol. 29 no. 2
Type: Research Article
ISSN: 1319-5166

Keywords

Open Access
Article
Publication date: 25 March 2021

Per Hilletofth, Movin Sequeira and Wendy Tate

Abstract

Purpose

This paper investigates the suitability of fuzzy-logic-based support tools for initial screening of manufacturing reshoring decisions.

Design/methodology/approach

Two fuzzy-logic-based support tools are developed together with experts from a Swedish manufacturing firm. The first uses a complete rule base and the second a reduced rule base. Sixteen inference settings are used in both of the support tools.
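
A minimal, hand-rolled Mamdani-style sketch of such a screening step, shrunk to two illustrative criteria instead of the six primary competitiveness criteria; the membership functions, the two rules and the decision threshold are assumptions, not the firm's rule base or either of the developed tools.

```python
# Hedged sketch of a Mamdani-style fuzzy screening step of the kind described
# above, reduced to two illustrative criteria (cost, quality).  Membership
# functions, rules and the threshold are assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def screen(cost: float, quality: float) -> float:
    """Return a 0-10 'proceed with full evaluation' score for one candidate."""
    u = np.linspace(0.0, 10.0, 101)           # output universe: attractiveness
    # Input fuzzification (criteria scored 0-10 by the experts).
    cost_low, cost_high = tri(cost, -1, 0, 5), tri(cost, 5, 10, 11)
    qual_low, qual_high = tri(quality, -1, 0, 5), tri(quality, 5, 10, 11)
    # Rule 1: low cost AND high quality -> attractive (min as AND, clip output set).
    r1 = np.minimum(min(cost_low, qual_high), tri(u, 5, 10, 11))
    # Rule 2: high cost OR low quality -> unattractive (max as OR).
    r2 = np.minimum(max(cost_high, qual_low), tri(u, -1, 0, 5))
    agg = np.maximum(r1, r2)                   # aggregate the clipped output sets
    if agg.sum() == 0.0:
        return 5.0                             # no rule fired: neutral score
    return float((u * agg).sum() / agg.sum())  # centroid defuzzification

# A candidate with favourable cost and quality scores clears an (assumed)
# screening threshold of 6 and would be passed on for full evaluation.
score = screen(cost=3.0, quality=8.0)
print(f"screening score = {score:.2f} -> "
      f"{'evaluate further' if score >= 6 else 'reject'}")
```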

Findings

The findings show that fuzzy-logic-based support tools are suitable for initial screening of manufacturing reshoring decisions. The developed support tools are capable of suggesting whether a reshoring decision should be further evaluated or not, based on six primary competitiveness criteria. In contrast to the existing literature, this research shows that, in terms of accuracy, it does not matter whether a complete or a reduced rule base is used: the developed support tools perform similarly, with no statistically significant differences. However, since the interpretability is much higher when a reduced rule base is used and it requires fewer resources to develop, the second tool is preferable for initial screening purposes.

Research limitations/implications

The developed support tools are implemented at a primary-criteria level and to make them more applicable, they should also include the sub-criteria level. The support tools should also be expanded to not only consider competitiveness criteria, but also other criteria related to availability of resources and strategic orientation of the firm. This requires further research with regard to multi-stage architecture and automatic generation of fuzzy rules in the manufacturing reshoring domain.

Practical implications

The support tools help managers to invest their scarce time on the most promising reshoring projects and to make timely and resilient decisions by taking a holistic perspective on competitiveness. Practitioners are advised to choose the type of support tool based on the available data.

Originality/value

There is a general lack of decision support tools in the manufacturing reshoring domain. This paper addresses the gap by developing fuzzy-logic-based support tools for initial screening of manufacturing reshoring decisions.

Details

Industrial Management & Data Systems, vol. 121 no. 5
Type: Research Article
ISSN: 0263-5577

Keywords
