Search results
1 – 10 of over 1000
Mohammadreza Tavakoli Baghdadabad
Abstract
Purpose
We propose a risk factor for idiosyncratic entropy and explore the relationship between this factor and expected stock returns.
Design/methodology/approach
We estimate a cross-sectional model of expected entropy that uses several common risk factors to predict idiosyncratic entropy.
Findings
We find a negative relationship between expected idiosyncratic entropy and returns. Specifically, the difference in Carhart alphas between high and low expected entropy portfolios is −2.37% per month. We also find a negative and significant price of expected idiosyncratic entropy risk using Fama-MacBeth cross-sectional regressions. Interestingly, expected entropy helps us explain the idiosyncratic volatility puzzle, whereby stocks with high idiosyncratic volatility earn low expected returns.
Originality/value
We propose a risk factor of idiosyncratic entropy and explore the relationship between this factor and expected stock returns. Interestingly, expected entropy helps us explain the idiosyncratic volatility puzzle that stocks with high idiosyncratic volatility earn low expected returns.
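The Fama-MacBeth procedure used in the findings can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's specification: a single characteristic stands in for expected idiosyncratic entropy, and the true price of risk is set negative by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: T months, N stocks, one characteristic with a true
# negative price of risk (both are illustrative assumptions).
T, N = 120, 200
true_lambda = -0.5
entropy = rng.normal(size=(T, N))                      # stock characteristic
returns = true_lambda * entropy + rng.normal(scale=2.0, size=(T, N))

# Step 1: one cross-sectional OLS per month, regressing returns on the
# characteristic (plus an intercept); collect the slope each month.
lambdas = np.empty(T)
for t in range(T):
    X = np.column_stack([np.ones(N), entropy[t]])
    coef, *_ = np.linalg.lstsq(X, returns[t], rcond=None)
    lambdas[t] = coef[1]

# Step 2: the Fama-MacBeth estimate is the time-series mean of the
# monthly slopes; its standard error comes from their time-series spread.
lam_hat = lambdas.mean()
se = lambdas.std(ddof=1) / np.sqrt(T)
t_stat = lam_hat / se
print(f"price of entropy risk: {lam_hat:.3f} (t = {t_stat:.2f})")
```

A significantly negative time-series mean of the monthly slopes is what a "negative and significant price of risk" refers to.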
Travis Fried, Anne Victoria Goodchild, Ivan Sanchez-Diaz and Michael Browne
Abstract
Purpose
Despite large bodies of research related to the impacts of e-commerce on last-mile logistics and sustainability, there has been limited effort to evaluate urban freight using an equity lens. Therefore, this study proposes a modeling framework that enables researchers and planners to estimate the baseline equity performance of a major e-commerce platform and evaluate equity impacts of possible urban freight management strategies. The study also analyzes the sensitivity of various operational decisions to mitigate bias in the analysis.
Design/methodology/approach
The model adapts empirical methodologies from activity-based modeling, transport equity evaluation, and residential freight trip generation (RFTG) to estimate person- and household-level delivery demand and cargo van traffic exposure in 41 U.S. Metropolitan Statistical Areas (MSAs).
Findings
Evaluating 12 measurements across varying population segments and spatial units, the study finds robust evidence for racial and socio-economic inequities in last-mile delivery for low-income and, especially, populations of color (POC). By the most conservative measurement, POC are exposed to roughly 35% more cargo van traffic than white populations on average, despite ordering less than half as many packages. The study explores the model’s utility by evaluating a simple scenario that finds marginal equity gains for urban freight management strategies that prioritize line-haul efficiency improvements over those improving intra-neighborhood circulations.
Originality/value
The study presents a first effort in building a modeling framework for more equitable decision-making in last-mile delivery operations and broader city planning.
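As a rough illustration of the kind of group-level comparison such a framework produces, the following sketch computes a population-weighted traffic-exposure ratio. All names and magnitudes are hypothetical synthetic data, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tract-level data: population counts by group and modeled
# cargo-van traffic per tract (illustrative, not the 41-MSA data set).
n_tracts = 500
pop_poc = rng.integers(100, 2000, n_tracts)
pop_white = rng.integers(100, 2000, n_tracts)
van_km = rng.gamma(shape=2.0, scale=50.0, size=n_tracts)  # van-km per tract

def weighted_exposure(pop, traffic):
    """Population-weighted mean exposure: each tract's traffic weighted
    by how many members of the group live there."""
    return np.average(traffic, weights=pop)

exp_poc = weighted_exposure(pop_poc, van_km)
exp_white = weighted_exposure(pop_white, van_km)

# A ratio above 1 means the first group is exposed to more van traffic
# on average (the study reports roughly 1.35 by its most conservative
# measurement).
gap = exp_poc / exp_white
print(f"POC/white exposure ratio: {gap:.2f}")
```

With independent synthetic data the ratio hovers near 1; the inequity finding corresponds to this ratio departing substantially from 1 in real data.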
Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi
Abstract
Purpose
The proposed model aims to predict stock market signals accurately. In this sense, the stock market is analysed by the technical analysis of Japanese Candlesticks, which is combined with support vector machine (SVM) and the following metaheuristic algorithms: particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA).
Design/methodology/approach
In addition, among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors provide comparative results to validate the designed model against the same basic models of three past articles. In the first model, PSO is used as a classification method to search the solution space thoroughly and at high running speed. In the second model, SVM and ICA are examined together, where ICA serves to improve the SVM parameters. Finally, in the third model, SVM and GA are studied, where GA acts as optimizer and feature selection agent.
Findings
The results indicate that the prediction accuracy of all new models is high for only six days. However, with respect to the confusion matrix results, it is understood that the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Nevertheless, SVM-ICA showed better performance than the other models considering the execution of the implemented models.
Research limitations/implications
The long time span of the data analyzed, covering the years 2013–2021, makes the input data analysis challenging; the inputs must be adjusted with respect to market conditions.
Originality/value
In this study, two approaches have been developed within a candlestick model: a raw-based and a signal-based approach, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.
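The GA-as-feature-selector idea from the third model can be sketched as follows. This is a simplified stand-in: a nearest-centroid classifier replaces the SVM, the data are synthetic, and all GA settings (population size, mutation rate, generations) are illustrative choices, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary "signal" data: 10 features, only the first 3 informative.
n, d = 400, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 1.5          # informative features shift with class

X_train, X_test = X[:300], X[300:]
y_train, y_test = y[:300], y[300:]

def accuracy(mask):
    """Nearest-centroid test accuracy using only features where mask == 1
    (a lightweight stand-in for the SVM used in the paper)."""
    if mask.sum() == 0:
        return 0.0
    cols = mask.astype(bool)
    c0 = X_train[y_train == 0][:, cols].mean(axis=0)
    c1 = X_train[y_train == 1][:, cols].mean(axis=0)
    d0 = ((X_test[:, cols] - c0) ** 2).sum(axis=1)
    d1 = ((X_test[:, cols] - c1) ** 2).sum(axis=1)
    return float(((d1 < d0) == y_test).mean())

# Plain GA over binary feature masks: tournament selection, uniform
# crossover, bit-flip mutation, with elitism.
pop = rng.integers(0, 2, (30, d))
for _ in range(40):
    fit = np.array([accuracy(m) for m in pop])
    new = [pop[fit.argmax()].copy()]                      # elitism
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] >= fit[j] else pop[j]        # tournament
        k, l = rng.integers(0, len(pop), 2)
        b = pop[k] if fit[k] >= fit[l] else pop[l]
        child = np.where(rng.random(d) < 0.5, a, b)       # crossover
        flip = rng.random(d) < 0.1                        # mutation
        child = np.where(flip, 1 - child, child)
        new.append(child)
    pop = np.array(new)

best = pop[np.array([accuracy(m) for m in pop]).argmax()]
print("selected features:", np.flatnonzero(best), "accuracy:", accuracy(best))
```

The GA tends to converge on masks dominated by the informative features, which is the role it plays relative to the classifier in the SVM-GA model.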
Leandro Pinheiro Vieira and Rafael Mesquita Pereira
Abstract
Purpose
This study aims to investigate the effect of smoking on the income of workers in the Brazilian labor market.
Design/methodology/approach
Using data from the 2019 National Health Survey (PNS), we initially address the sample selection bias concerning labor market participation by using the Heckman (1979) method. Subsequently, the decomposition of income between smokers and nonsmokers is analyzed, both on average and across the earnings distribution, by employing the procedure of Firpo, Fortin and Lemieux (2009), the FFL decomposition. The Ñopo (2008) technique is also used to obtain more robust estimates.
Findings
Overall, the findings indicate an income penalty for smokers in the Brazilian labor market across both the average and all quantiles of the income distribution. Notably, the most significant differentials and income penalties against smokers are observed in the lower quantiles of the distribution. Conversely, in the higher quantiles, there is a tendency toward a smaller magnitude of this gap, with limited evidence of an income penalty associated with this habit.
Research limitations/implications
This study presents an important limitation: the PNS (2019) does not provide information about some subjective factors that also tend to influence labor income, such as the level of effort and the specific ability of each worker, whether a smoker or not. These factors could also, in some way, be related to a latent individual predisposition that influences the choice of smoking.
Originality/value
The relevance of the present study lies in identifying the heterogeneity of the income gap in favor of nonsmokers. In the lower quantiles, the differentials against smokers have a greater magnitude and unexplained penalties in these workers' income are more prevalent, whereas in the higher quantiles the differentials are small and there is little evidence of an earnings penalty for being a smoker.
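The mean-level building block behind decompositions of this kind can be sketched with an Oaxaca-Blinder-style split of the smoker/nonsmoker wage gap into explained and unexplained parts; the FFL procedure extends this idea across quantiles. Data and coefficients below are synthetic, not PNS estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic workers: log wage depends on schooling, plus a wage penalty
# for smokers (all numbers are illustrative assumptions).
n = 2000
smoker = rng.random(n) < 0.3
school = rng.normal(11, 3, n) - smoker * 1.0       # smokers: less schooling
logw = 1.0 + 0.08 * school - 0.10 * smoker + rng.normal(0, 0.3, n)

def ols(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_ns = ols(school[~smoker], logw[~smoker])    # nonsmoker wage equation
b_s = ols(school[smoker], logw[smoker])       # smoker wage equation
xbar_ns = np.array([1.0, school[~smoker].mean()])
xbar_s = np.array([1.0, school[smoker].mean()])

gap = logw[~smoker].mean() - logw[smoker].mean()
explained = (xbar_ns - xbar_s) @ b_ns          # differences in characteristics
unexplained = xbar_s @ (b_ns - b_s)            # differences in returns ("penalty")
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

Because OLS with an intercept fits group means exactly, the two components sum to the raw gap by construction; the "unexplained" part is what the abstract calls the income penalty against smokers.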
Babitha Philip and Hamad AlJassmi
Abstract
Purpose
To proactively draw efficient maintenance plans, road agencies should be able to forecast main road distress parameters, such as cracking, rutting, deflection and International Roughness Index (IRI). Nonetheless, the behavior of those parameters throughout pavement life cycles is associated with high uncertainty, resulting from various interrelated factors that fluctuate over time. This study aims to propose the use of dynamic Bayesian belief networks for the development of time-series prediction models to probabilistically forecast road distress parameters.
Design/methodology/approach
While the Bayesian belief network (BBN) has the merit of capturing uncertainty associated with variables in a domain, dynamic BBNs, in particular, are deemed ideal for forecasting road distress over time due to their Markovian and invariant transition probability properties. Four dynamic BBN models are developed to represent rutting, deflection, cracking and IRI, using pavement data collected from 32 major road sections in the United Arab Emirates between 2013 and 2019. Those models are based on several factors affecting pavement deterioration, which are classified into three categories: traffic factors, environmental factors and road-specific factors.
Findings
The four developed performance prediction models achieved an overall precision and reliability rate of over 80%.
Originality/value
The proposed approach provides flexibility to illustrate road conditions under various scenarios, which is beneficial for pavement maintainers in obtaining a realistic representation of expected future road conditions, where maintenance efforts could be prioritized and optimized.
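The Markovian, invariant-transition structure the approach relies on can be illustrated with a plain forward rollout of a condition-state distribution. The states and transition probabilities below are hypothetical, not values estimated from the UAE data.

```python
import numpy as np

# Hypothetical discretization of a distress parameter (e.g. IRI) into
# three condition states; the transition matrix is illustrative.
states = ["good", "fair", "poor"]
P = np.array([[0.85, 0.12, 0.03],   # P[i, j] = P(state j next year | state i)
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])  # deterioration only, no maintenance

belief = np.array([1.0, 0.0, 0.0])  # road starts in "good" condition

# Time-invariant Markov transition: next year's distribution is the
# current distribution pushed through P (the dynamic-BBN rollout step).
for year in range(1, 7):
    belief = belief @ P
    print(year, dict(zip(states, belief.round(3))))
```

Each yearly distribution is a probabilistic forecast of road condition; maintenance scenarios would be modeled by modifying P, which is what makes the approach useful for prioritizing interventions.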
Vladimir Dženopoljac, Jasmina Ognjanović, Aleksandra Dženopoljac and Sascha Kraus
Abstract
Purpose
The employer brand is a crucial intangible asset for companies as it enhances the employer–employee relationship, leading to improved employee performance and overall company outcomes. This paper aims to investigate the contribution of the employer brand to the financial results of companies in southern Europe.
Design/methodology/approach
The sample consists of 266 companies operating in southern European countries during the year 2020. Secondary data on employer brand attributes, assessed from the perspective of current employees, were collected from the Glassdoor platform. Financial indicators were obtained from the companies' annual financial reports. The research hypotheses were tested using regression analysis.
Findings
The results of the regression analysis support the notion that the employer brand contributes to profitability indicators and management effectiveness indicators of southern European companies. However, the study did not find evidence supporting the contribution of the employer brand to market indicators and financial structure indicators of the observed companies.
Originality/value
This study is one of the first empirical investigations to assess the role of the employer brand as a human capital tool for enhancing the financial performance of companies in southern Europe. The study examines employer brand attributes from the perspective of current employees, who actively participate in shaping the employer brand and the company's image. In contrast to prior research, this study incorporates a more extensive set of financial indicators, categorized into four groups: profitability indicators, management effectiveness indicators, market indicators and financial structure indicators.
Daniel Francois Dörfling and Euphemia Godspower-Akpomiemie
Abstract
Purpose
This study aims to identify the propensity for clients (legal and natural persons) to adopt peer-to-peer (P2P) short-term insurance policies as opposed to traditional and/or centralized short-term insurance policies.
Design/methodology/approach
In this paper, data were collected through a survey of 102 short-term insurance clients selected by convenience sampling. The TAM2 questionnaire was adapted to evaluate the intention to adopt a P2P insurance policy.
Findings
The findings of this study shed light on the factors influencing the adoption and (dis)continuation of short-term insurance products, both traditional and digital, among South African consumers. The results demonstrate that perceived usefulness, ease of use, trust, risk perception and subjective norm play crucial roles in individuals' intention to use or (dis)continue the use of these insurance products.
Practical implications
The study's findings provide actionable insights for practitioners in the short-term insurance sector, with a focus on marketers and e-commerce professionals. These insights emphasize the need to prioritize user-friendly design and trust-building measures in the development of P2P insurance systems. Additionally, practitioners should consider harnessing the power of social influence and carefully balancing innovative features with familiarity in their marketing efforts. These strategies are poised to enhance the adoption and competitive positioning of P2P insurance solutions amidst the evolving landscape of digital transformation.
Originality/value
This study makes a substantial contribution by employing the technology acceptance model (TAM) in a novel and unconventional manner. It not only explicates the intricate dynamics governing the adoption and discontinuation of short-term insurance products, encompassing both conventional and digital alternatives, within the South African consumer milieu but also extends its purview to infer the reasons behind the limited widespread adoption of the digital counterpart, despite its superior value proposition compared to the traditional offering. The findings elucidate the critical determinants shaping individuals' decisions in this dynamic market segment. This research enhances the global discourse on insurance adoption with a unique South African perspective and furnishes insurers and marketers with empirically grounded insights to optimize their strategies and cultivate substantive connections with their target demographic.
Djordje Cica, Branislav Sredanovic, Sasa Tesic and Davorin Kramar
Abstract
Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with cutting fluids, the machining industries are continuously developing technologies and systems for cooling/lubricating the cutting zone while maintaining machining efficiency. In the present study, three regression-based machine learning techniques, namely polynomial regression (PR), support vector regression (SVR) and Gaussian process regression (GPR), were developed to predict machining force, cutting power and cutting pressure in the turning of AISI 1045. In the development of the predictive models, the machining parameters of cutting speed, depth of cut and feed rate were considered as control factors. Since cooling/lubricating techniques significantly affect machining performance, prediction models of the quality characteristics were developed under minimum quantity lubrication (MQL) and high-pressure coolant (HPC) cutting conditions. The prediction accuracy of the developed models was evaluated by statistical error analysis methods. The results of the regression-based machine learning techniques were also compared with probably the most frequently used machine learning method, namely artificial neural networks (ANN). Finally, a metaheuristic approach based on a neural network algorithm was utilized to perform an efficient multi-objective optimization of process parameters for both cutting environments.
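Of the three regression techniques named, polynomial regression is the simplest to sketch. The following example fits a second-order PR model to synthetic turning data; the force law and parameter ranges are illustrative assumptions, not the AISI 1045 measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic turning data: cutting force grows with feed and depth of
# cut, mildly with speed (an assumed relationship for illustration).
n = 200
speed = rng.uniform(100, 300, n)     # m/min
feed = rng.uniform(0.05, 0.3, n)     # mm/rev
depth = rng.uniform(0.5, 2.0, n)     # mm
force = 900 * feed**0.75 * depth + 0.1 * speed + rng.normal(0, 10, n)

def design(s, f, d):
    """Second-order polynomial design matrix: intercept, linear,
    interaction and squared terms of the three machining parameters."""
    return np.column_stack([np.ones(len(s)), s, f, d,
                            s * f, s * d, f * d, s**2, f**2, d**2])

X = design(speed, feed, depth)
beta, *_ = np.linalg.lstsq(X, force, rcond=None)
pred = X @ beta

ss_res = ((force - pred) ** 2).sum()
ss_tot = ((force - force.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

SVR and GPR would replace the fixed polynomial basis with kernel-based function classes, trading interpretability of the coefficients for flexibility.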
Xiaojie Xu and Yun Zhang
Abstract
Purpose
For policymakers and participants of financial markets, predicting the trading volumes of financial indices is an important issue. This study aims to address such a prediction problem based on the CSI300 nearby futures by using high-frequency data recorded each minute, from the launch date of the futures to roughly two years after the constituent stocks of the futures all became shortable, a time period witnessing significantly increased trading activity.
Design/methodology/approach
This study adopts a neural network to model the irregular trading volume series of the CSI300 nearby futures in order to answer the following questions: can the lags of the trading volume series be used to make predictions; if so, how far ahead can the predictions go and how accurate can they be; can predictive information from the trading volumes of the CSI300 spot and first distant futures improve prediction accuracy, and what is the corresponding magnitude; how sophisticated is the model; and how robust are its predictions?
Findings
The results of this study show that a simple neural network model could be constructed with 10 hidden neurons to robustly predict the trading volume of the CSI300 nearby futures using 1–20 min ahead trading volume data. The model leads to a root mean square error of about 955 contracts. Utilizing additional predictive information from the trading volumes of the CSI300 spot and first distant futures could further benefit prediction accuracy, with a magnitude of improvement of about 1–2%. This benefit is particularly significant when the trading volume of the CSI300 nearby futures is close to zero. Another benefit, at the cost of the model becoming slightly more sophisticated with more hidden neurons, is that predictions could be generated through 1–30 min ahead trading volume data.
Originality/value
The results of this study could be used for multiple purposes, including designing financial index trading systems and platforms, monitoring systematic financial risks and building financial index price forecasting models.
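The lag-based network described in the findings can be sketched as follows: build a matrix of the last 20 observations as features and train a small one-hidden-layer network on a synthetic volume series. Everything here (the series, the architecture details, the training settings) is illustrative, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic intraday volume series: autoregressive with an intraday
# seasonal pattern (a stand-in for the CSI300 nearby-futures data).
T = 3000
t = np.arange(T)
vol = 1000 + 300 * np.sin(2 * np.pi * t / 240) + rng.normal(0, 50, T)
for i in range(1, T):
    vol[i] += 0.5 * (vol[i - 1] - 1000)

# Lagged feature matrix: the last L observations predict the next value,
# mirroring the 20-lag window in the study.
L = 20
X = np.column_stack([vol[i:T - L + i] for i in range(L)])
y = vol[L:]
X = (X - X.mean()) / X.std()
y_mu, y_sd = y.mean(), y.std()
yn = (y - y_mu) / y_sd

# One-hidden-layer network with 10 tanh neurons, trained by batch
# gradient descent on squared error.
H, lr = 10, 0.05
W1 = rng.normal(0, 0.1, (L, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, H); b2 = 0.0
for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - yn
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)        # backprop through tanh
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = y_sd * np.sqrt(((np.tanh(X @ W1 + b1) @ W2 + b2 - yn) ** 2).mean())
print(f"in-sample RMSE: {rmse:.1f} contracts")
```

Adding trading volumes of related series (spot, first distant futures) would simply widen the feature matrix with their lags, which is the mechanism behind the reported 1–2% accuracy gain.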
Hussein Y.H. Alnajjar and Osman Üçüncü
Abstract
Purpose
Artificial intelligence (AI) models are demonstrating day by day that they can find long-term solutions to improve wastewater treatment efficiency. Artificial neural networks (ANNs) are one of the most important of these models, and they are increasingly being used to forecast water resource variables. The goal of this study was to create an ANN model to estimate the removal efficiency of biological oxygen demand (BOD), total nitrogen (TN), total phosphorus (TP) and total suspended solids (TSS) at the effluent of various primary and secondary treatment methods in a wastewater treatment plant (WWTP).
Design/methodology/approach
The MATLAB App Designer model was used to generate the data set. Various combinations of wastewater quality data, such as temperature (T), TN, TP and hydraulic retention time (HRT), are used as inputs into the ANN to assess the degree of effect of each of these variables on BOD, TN, TP and TSS removal efficiency. Two of the models reflect two different types of primary treatment, while the other nine models represent different types of subsequent treatment. The ANN model's findings are compared to those of the MATLAB App Designer model. For evaluating model performance, mean square error (MSE) and the coefficient of determination (R2) are utilized as comparative metrics.
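The two comparison metrics are straightforward to compute directly. The observed and predicted removal efficiencies below are made-up numbers used only to show the formulas, not the plant's data.

```python
import numpy as np

# Illustrative observed vs. predicted removal efficiencies (% removal).
observed = np.array([91.2, 88.5, 94.1, 90.3, 86.7, 92.8])
predicted = np.array([90.8, 89.1, 93.5, 90.9, 87.2, 92.1])

# Mean square error: average squared prediction error.
mse = ((observed - predicted) ** 2).mean()

# Coefficient of determination: share of the variance in the observed
# values explained by the predictions.
ss_res = ((observed - predicted) ** 2).sum()
ss_tot = ((observed - observed.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"MSE = {mse:.3f}, R^2 = {r2:.3f}")
```

Low MSE together with R2 close to 1, as reported in the findings, indicates that the model tracks both the level and the variation of the measured efficiencies.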
Findings
For both training and testing, the R values for the ANN models were greater than 0.99. Based on the comparisons, it was discovered that the ANN model can be used to estimate the removal efficiency of BOD, TN, TP and TSS in a WWTP and that the ANN model produces results very similar and satisfactory relative to the App Designer model. The R-value (correlation coefficient) of 0.9909 and the MSE of 5.962 indicate that the model is accurate. Because of the many benefits of the ANN models used in this study, they hold considerable potential as a general modeling tool for a range of other complicated process systems that are difficult to solve using conventional modeling techniques.
Originality/value
The objective of this study was to develop an ANN model that could be used to estimate the removal efficiency of pollutants such as BOD, TN, TP and TSS at the effluent of various primary and secondary treatment methods in a WWTP. In the future, the ANN could be used to design a new WWTP and forecast the removal efficiency of pollutants.