Search results
1 – 10 of 78
Mohd Irfan and Anup Kumar Sharma
Abstract
Purpose
A progressive hybrid censoring scheme (PHCS) becomes impractical for ensuring dependable outcomes when there is a low likelihood of encountering a small number of failures prior to the predetermined terminal time T. The generalized progressive hybrid censoring scheme (GPHCS) efficiently overcomes this limitation of the PHCS.
Design/methodology/approach
In this article, estimation of the model parameter, survival function and hazard rate of the Unit-Lindley distribution (ULD) is considered when the sample comes from the GPHCS. The maximum likelihood estimator has been derived using the Newton–Raphson iterative procedure. Approximate confidence intervals of the model parameter and arbitrary functions of it are established using the Fisher information matrix. Bayesian estimation procedures have been derived using the Metropolis–Hastings algorithm under the squared error loss function. Convergence of the Markov chain Monte Carlo (MCMC) samples has been examined. Various optimality criteria have been considered. An extensive Monte Carlo simulation analysis has been carried out to compare and validate the proposed estimation techniques.
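The Newton–Raphson step for the maximum likelihood estimator can be sketched as follows. This is a minimal illustration for a complete (uncensored) sample from the unit-Lindley density f(x; θ) = θ²/(1+θ) · (1−x)⁻³ · exp(−θx/(1−x)); the paper's actual likelihood under the GPHCS contains additional censoring terms not shown here, and the data values below are illustrative.

```python
import math

def unit_lindley_loglik_derivs(theta, xs):
    # First and second derivatives of the complete-sample log-likelihood
    # l(theta) = 2n log(theta) - n log(1+theta) - 3*sum(log(1-x)) - theta*sum(x/(1-x))
    n = len(xs)
    s = sum(x / (1.0 - x) for x in xs)               # sum of x_i / (1 - x_i)
    d1 = 2.0 * n / theta - n / (1.0 + theta) - s     # score function
    d2 = -2.0 * n / theta ** 2 + n / (1.0 + theta) ** 2  # always negative: concave
    return d1, d2

def mle_newton_raphson(xs, theta0=1.0, tol=1e-10, max_iter=100):
    # Newton-Raphson iteration theta <- theta - l'(theta) / l''(theta)
    theta = theta0
    for _ in range(max_iter):
        d1, d2 = unit_lindley_loglik_derivs(theta, xs)
        step = d1 / d2
        theta -= step
        if abs(step) < tol:
            break
    return theta

data = [0.12, 0.35, 0.47, 0.58, 0.66, 0.71, 0.83]  # illustrative values on (0, 1)
theta_hat = mle_newton_raphson(data)
```

Because the second derivative is negative for all θ > 0, the complete-sample log-likelihood is concave and the iteration converges from any reasonable starting value.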
Findings
The Bayesian MCMC approach to estimating the model parameter and reliability characteristics from generalized progressive hybrid censored data of the ULD is recommended. The authors anticipate that health data analysts and reliability professionals will benefit from the findings and approaches presented in this study.
Originality/value
The ULD has a broad range of practical utility, which makes estimating its model parameter and reliability characteristics an important problem. The significance of the GPHCS also encouraged the authors to consider the present estimation problem, which has not previously been discussed in the literature.
Julio Urenda and Olga Kosheleva
Abstract
Purpose
While the main purpose of reporting – e.g. reporting for taxes – is to gauge the economic state of a company, the fact that reporting is done at pre-determined dates distorts the reporting results. For example, to create a larger impression of their productivity, companies fire temporary workers before the reporting date and re-hire them right away. The purpose of this study is to decide how to avoid such distortion.
Design/methodology/approach
This study aims to come up with a solution which is applicable for all possible reasonable optimality criteria. Thus, a general formalism for describing and analyzing all such criteria is used.
Findings
This study shows that most distortion problems will disappear if the fixed pre-determined reporting dates are replaced with individualized random reporting dates. This study also shows that for all reasonable optimality criteria, the optimal way to assign reporting dates is to do it uniformly.
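The uniform assignment of individualized random reporting dates described above can be sketched as follows; the company identifiers and day-of-year granularity are illustrative assumptions, not taken from the study.

```python
import random

def assign_reporting_dates(companies, days_in_year=365, seed=0):
    # Draw each company's reporting date uniformly over the year, so that
    # no single fixed date concentrates reporting-driven distortions.
    rng = random.Random(seed)
    return {c: rng.randrange(days_in_year) for c in companies}

dates = assign_reporting_dates(["A", "B", "C"])
```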
Research limitations/implications
This study shows that for all reasonable optimality criteria, the optimal way to assign reporting dates is to do it uniformly.
Practical implications
It is found that the individualized random tax reporting dates would be beneficial for the economy.
Social implications
It is found that the individualized random tax reporting dates would be beneficial for society as a whole.
Originality/value
This study proposes a new idea of replacing the fixed pre-determined reporting dates with randomized ones. On the informal level, this idea may have been proposed earlier, but what is completely new is the analysis of which randomization of reporting dates is best for the economy: it turns out that under all reasonable optimality criteria, uniform randomization works best.
Jorge Morvan Marotte Luz Filho and Antonio Andre Novotny
Abstract
Purpose
Topology optimization of structures under self-weight loading is a challenging problem which has received increasing attention in recent years. The use of standard formulations based on compliance minimization under volume constraint suffers from numerous difficulties for self-weight dominant scenarios, such as non-monotonic behaviour of the compliance, possible unconstrained character of the optimum and parasitic effects for low densities in density-based approaches. This paper aims to propose an alternative approach for dealing with topology design optimization of structures in three spatial dimensions subject to self-weight loading.
Design/methodology/approach
In order to overcome the first two issues above, a regularized formulation of the classical compliance minimization problem under volume constraint is adopted, which enjoys two important features: (a) it allows for imposing any feasible volume constraint and (b) the standard (original) formulation is recovered once the regularizing parameter vanishes. The resulting topology optimization problem is solved with the help of the topological derivative method, which naturally overcomes the last issue above since no intermediate-density (grey-scale) approach is necessary.
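For orientation, the classical density-based statement of compliance minimization under a volume constraint, with the self-weight load depending on the material distribution, can be sketched in a generic textbook form (not the paper's exact regularized functional):

```latex
\min_{\rho}\; C(\rho) = \mathbf{f}(\rho)^{\top}\,\mathbf{u}(\rho)
\quad \text{s.t.} \quad
\mathbf{K}(\rho)\,\mathbf{u}(\rho) = \mathbf{f}(\rho), \qquad
\int_{\Omega} \rho \, d\Omega \le \overline{V}.
```

Because the load vector f(ρ) itself depends on the density ρ under self-weight, adding material also adds load, so the compliance need not decrease monotonically as material is added; this is precisely the non-monotonicity difficulty that motivates the regularized formulation.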
Findings
A novel and simple approach for dealing with topology design optimization of structures in three spatial dimensions subject to self-weight loading is proposed. A set of benchmark examples is presented, showing not only the effectiveness of the proposed approach but also highlighting the role of self-weight loading in the final design: (1) a bridge structure subject to pure self-weight loading; (2) a truss-like structure submitted to an external horizontal force (free of self-weight loading) and also to the combination of self-weight and the external horizontal loading; and (3) a tower structure under dominant self-weight loading.
Originality/value
The contributions are: an alternative regularized formulation of the compliance minimization problem that naturally overcomes the difficulties of self-weight dominant scenarios; a rigorous derivation of the associated topological derivative; computational aspects of a simple FreeFEM implementation; and three-dimensional numerical benchmarks of bridge, truss-like and tower structures.
Mario Becerra, Matteo Balliauw, Peter Goos, Bruno De Borger, Benjamin Huyghe and Thomas Truyts
Abstract
Purpose
Ticket sales are an essential source of income for football clubs and federations. Analyzing the determinants of fans' willingness-to-pay for tickets is therefore an important exercise. By knowing which match- and fan-related characteristics influence fans' willingness-to-pay for a ticket, and to what extent, football clubs and federations can modify their ticket offering and targeting in order to optimize this revenue stream.
Design/methodology/approach
Using a detailed discrete choice experiment, based on McFadden's random utility theory, this paper formulates a Bayesian hierarchical multinomial logit model. Such models are very common in the discrete choice modeling literature. The analysis identifies to what extent match and personal attributes influence fans' willingness-to-pay for games of the Belgian men's and women's football national teams.
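The choice-probability kernel underlying such models follows from McFadden's random utility framework: each alternative's systematic utility is mapped to a choice probability by the multinomial logit formula. A minimal sketch, with hypothetical deterministic utilities (the actual model is hierarchical and Bayesian, with utilities built from the match and fan attributes):

```python
import math

def choice_probabilities(utilities):
    # Multinomial logit: P(j) = exp(V_j) / sum_k exp(V_k),
    # with max-subtraction for numerical stability.
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for three ticket options, e.g. combining
# opponent strength, seat location, kick-off time and price effects.
v = [1.2, 0.4, -0.5]
p = choice_probabilities(v)
```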
Findings
The results show that the strength of the opponent, the type of competition, the location of the seats in the stadium, the day and kick-off time of the match and the ticket price exert an influence on the choice of the respondent. Fans are attracted most by competitive games against strong opponents. They prefer to sit along the sideline, and they have clear preferences for specific kick-off days and times. The authors also find substantial variation between socio-demographic groups, defined in terms of factors such as age, gender and family composition.
Practical implications
The authors use the results to estimate the willingness-to-pay for match tickets for different socio-demographic groups. Their findings are useful for football clubs and federations interested in optimizing the prices of their match tickets.
Originality/value
To the best of the authors' knowledge, no stated preference methods, such as discrete choice analysis, have been used to analyze the willingness-to-pay of sports fans. The advantage of discrete choice analysis is that options and variations in tickets that are not yet available in practice can be studied, allowing football organizations to increase revenues from new ticketing instruments.
Radha Subramanyam, Y. Adline Jancy and P. Nagabushanam
Abstract
Purpose
A cross-layer approach in the media access control (MAC) layer will address interference and jamming problems. Hybrid distributed MAC can be used for simultaneous voice and data transmissions in wireless sensor network (WSN) and Internet of Things (IoT) applications. Choosing the correct objective function in the Nash equilibrium of a game-theoretic formulation will address the fairness index and resource allocation to the nodes. Game theory optimization of distributed MAC may increase network performance. The purpose of this study is to survey the various operations that can be carried out using distributive and adaptive MAC protocols. Hill-climbing distributed MAC does not need a central coordination system, and location-based transmission with neighbour awareness reduces transmission power.
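As one concrete instance of the fairness index mentioned above, Jain's index is a standard fairness metric in networking, used here purely as an illustration (the survey does not single out this exact formula):

```python
def jain_fairness_index(throughputs):
    # Jain's fairness index: (sum x)^2 / (n * sum x^2).
    # Equals 1.0 for a perfectly even allocation and approaches
    # 1/n when one node captures nearly all of the resource.
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (s * s) / (n * sq)

even = jain_fairness_index([5.0, 5.0, 5.0, 5.0])    # perfectly fair
skewed = jain_fairness_index([20.0, 0.1, 0.1, 0.1])  # one node dominates
```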
Design/methodology/approach
Distributed MAC in wireless networks is used to address challenges such as network lifetime, reduced energy consumption and improved delay performance. In this paper, a survey is made of various cooperative communications in MAC protocols, optimization techniques used to improve MAC performance in various applications and mathematical approaches involved in game theory optimization for MAC protocols.
Findings
Spatial reuse of the channel improves by 3%–29%, and multichannel operation improves throughput by 8% using a distributed MAC protocol. The Nash equilibrium is found to perform well, focusing on the energy utility of individual players in the network. Fuzzy logic improves channel selection by 17% and secondary users' involvement by 8%. A cross-layer approach in the MAC layer addresses interference and jamming problems. Hybrid distributed MAC can be used for simultaneous voice and data transmissions in WSN and IoT applications. Cross-layer and cooperative communication give energy savings of 27% and reduce hop distance by 4.7%. Choosing the correct objective function in the Nash equilibrium addresses the fairness index and resource allocation to the nodes.
Research limitations/implications
Other optimization techniques can be applied for WSN to analyze the performance.
Practical implications
Game theory optimization of distributed MAC may increase network performance. Optimal cuckoo search improves throughput by 90% and reduces delay by 91%. Stochastic approaches detect 80% of attacks even with 90% malicious nodes.
Social implications
Channel allocation in a centralized or static manner must be based on traffic demands, whether the traffic is dynamic or fluctuating. Usage of multimedia devices has also increased, which in turn has increased the demand for high throughput. Co-channel interference keeps changing and must be mitigated, which can be handled by proper resource allocation. Network survival depends on efficient usage of valid paths in the network, avoiding transmission failures, and effective usage of time slots.
Originality/value
Literature survey is carried out to find the methods which give better performance.
Bibhas Chandra Giri and Sushil Kumar Dey
Abstract
Purpose
The purpose of this study is to investigate the impact of greening and promotional effort dependent stochastic market demand on the remanufacturer's and the collector's profits when the quality of used products for remanufacturing is uncertain in a reverse supply chain.
Design/methodology/approach
The proposed model is developed to obtain optimal profits for the remanufacturer, the collector and the whole supply chain. Both the centralized and decentralized scenarios are considered. To motivate the collector through profit enhancement, the remanufacturer designs a cost-sharing contract. Through numerical examples and sensitivity analysis, the consequences of greenness and promotional effort on optimal profits are investigated.
Findings
The results show that the remanufacturer benefits from greening and promotional effort enhancement. However, a higher minimum acceptable quality level decreases the profits of the remanufacturer and the collector. A cost-sharing contract coordinates the supply chain and improves the remanufacturer's and the collector's profits. Besides green innovation, remanufacturing mitigates the harmful effects of waste on the environment.
Originality/value
Two different viewpoints of remanufacturing are considered here – environmental sustainability and economic sustainability. This paper considers a reverse supply chain with a remanufacturer who remanufactures the used products collected by the collector. The quality of used products is uncertain, and customer demand is stochastic and sensitive to green and promotional effort. These two types of uncertainty, together with green and promotional effort sensitive customer demand, distinguish the current paper from the existing literature.
V. Chowdary Boppana and Fahraz Ali
Abstract
Purpose
This paper presents an experimental investigation in establishing the relationship between FDM process parameters and tensile strength of polycarbonate (PC) samples using the I-Optimal design.
Design/methodology/approach
The I-optimal design methodology is used to plan the experiments by means of Minitab 17.1 software. Samples are manufactured using a Stratasys FDM 400mc and tested as per ISO standards. Additionally, an artificial neural network (ANN) model was developed and compared to the regression model in order to select an appropriate model for optimisation. Finally, the genetic algorithm (GA) solver is executed to improve the tensile strength of FDM-built PC components.
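The GA step can be sketched as follows, with a hypothetical stand-in response surface in place of the trained ANN; the surrogate function, parameter bounds and GA settings below are illustrative assumptions, not the paper's.

```python
import random

def predicted_strength(raster_angle, air_gap):
    # Hypothetical stand-in for the trained ANN response surface:
    # a smooth function peaking at a 0-degree raster angle and zero air gap.
    return 60.0 - 0.01 * raster_angle ** 2 - 40.0 * air_gap ** 2

def genetic_algorithm(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    # Individuals: (raster angle in [0, 90] degrees, air gap in [0, 0.5] mm)
    pop = [(rng.uniform(0, 90), rng.uniform(0, 0.5)) for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fitter half as parents
        pop.sort(key=lambda ind: predicted_strength(*ind), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()  # blend crossover between two parents
            child = (w * a[0] + (1 - w) * b[0], w * a[1] + (1 - w) * b[1])
            # Gaussian mutation, clipped to the parameter bounds
            child = (min(90.0, max(0.0, child[0] + rng.gauss(0, 2.0))),
                     min(0.5, max(0.0, child[1] + rng.gauss(0, 0.02))))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: predicted_strength(*ind))

best = genetic_algorithm()
```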
Findings
This study demonstrates that the selected process parameters (raster angle, raster-to-raster air gap, build orientation about the Y axis and the number of contours) had a significant effect on tensile strength, with raster angle being the most influential factor. Increasing the build orientation about the Y axis produced specimens with compact structures that resulted in improved fracture resistance.
Research limitations/implications
The fitted regression model has a p-value less than 0.05, which suggests that the model terms significantly represent the tensile strength of the PC samples. Further, from the normal probability plot it was found that the residuals follow a straight line; thus the developed model provides adequate predictions. Furthermore, in the validation runs, a close agreement between predicted and actual values was seen along the reference line, which further supports satisfactory model predictions.
Practical implications
This study successfully investigated the effects of the selected process parameters – raster angle, raster-to-raster air gap, build orientation about the Y axis and the number of contours – on the tensile strength of PC samples utilising the I-optimal design and ANOVA. In addition, regression and ANN models were developed for prediction of the part strength. The selected ANN model was optimised using the GA solver to determine optimal parameter settings.
Originality/value
The proposed ANN-GA approach is more appropriate for establishing the non-linear relationship between the selected process parameters and tensile strength. Further, the proposed ANN-GA methodology can assist in the manufacture of various industrial products with nylon, polyethylene terephthalate glycol (PETG) and PET as new 3D-printing materials.
Innocent Chigozie Osuizugbo, Fidelis Okechukwu Ezeokoli, Kevin Chuks Okolie and Aduragbemi Deborah Olojo
Abstract
Purpose
The application of good buildability practices is vital for improving the performance of projects and businesses in the construction sector. Despite the plethora of research into buildability in construction in previous years, there is little information concerning how buildability practice can be successfully implemented. This paper aims to develop a conceptual framework that explains how buildability practice can be implemented successfully in the construction industry.
Design/methodology/approach
The paper uses an integrative literature review method to synthesise literature from different domains to describe various themes by which buildability assessment can be successfully implemented in the construction industry.
Findings
The review of the literature yielded a conceptual buildability implementation framework organised around four principal themes: buildability attributes for improving the practice of construction management, factors supporting the implementation of buildability assessment, measures for improving the buildability of building designs and factors impeding the implementation of buildability assessment.
Originality/value
The outcome of this study contributes to knowledge in three different ways. First, the framework emerging from this study provides guidance to stakeholders on strategies for the successful implementation of buildability. Second, the information gathered in this study is useful for the development of a buildability assessment tool. Finally, the framework has the potential to improve the practice of embedding buildability into designs. The detailed descriptions of the relevant variables at each principal theme advance the understanding of buildability in the construction industry and are fundamental to developing buildability assessment tools for the industry.
Innocent Chigozie Osuizugbo, Kevin Chuks Okolie, Olalekan Shamsideen Oshodi and Opeyemi Olanrewaju Oyeyipo
Abstract
Purpose
Construction management researchers have acknowledged that the use of buildability could improve project outcomes. Efficient use of the resources required for the procurement of construction projects is important for the economy. This study aims to aggregate the current knowledge on buildability within the construction management domain into an understandable whole using the systematic review approach.
Design/methodology/approach
An interpretivist epistemological approach was used as a lens for the systematic review of published research on buildability. The selected articles cover the time period between 1987 and 2020. The articles published in 2021 and 2022 were excluded to ensure that the scope of the current study is distinct and clear. In this research, qualitative content analysis was used to scrutinise the selected journal papers.
Findings
Based on the analysis of the literature, the trends and gaps in the current knowledge on the topic of interest were identified. It was found that stakeholders' knowledge and commitment play a major role in the extent of adoption of buildability as a practice in the construction sector. Also, the study confirms that the use of buildability is beneficial to the project and its stakeholders.
Originality/value
The study maps the current state of knowledge on buildability and provides information on the gaps that could be explored in the future by researchers.
When the probability of each model is known, a natural idea is to select the most probable model. However, in many practical situations, the exact values of these probabilities…
Abstract
Purpose
When the probability of each model is known, a natural idea is to select the most probable model. However, in many practical situations, the exact values of these probabilities are not known; only the intervals that contain these values are known. In such situations, a natural idea is to select some probabilities from these intervals and to select the model with the largest selected probability. The purpose of this study is to decide how to most adequately select these probabilities.
Design/methodology/approach
It is desirable to have a probability-selection method that preserves independence. If, according to the probability intervals, the two events were independent, then the selection of probabilities within the intervals should preserve this independence.
Findings
The paper describes all techniques for decision making under interval uncertainty about probabilities that are consistent with independence. It is proved that these techniques form a 1-parametric family, a family that has already been successfully used in such decision problems.
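As an illustration of a 1-parameter selection rule that is consistent with independence (not necessarily the paper's exact parametrization), consider geometric interpolation between the interval endpoints: because product intervals multiply endpoint-wise, the selected probability of a product interval equals the product of the individually selected probabilities.

```python
def select_probability(lo, hi, alpha):
    # Geometric interpolation p = lo^(1-alpha) * hi^alpha, 0 <= alpha <= 1.
    # Applied to the product interval [l1*l2, u1*u2], it yields
    # (l1*l2)^(1-alpha) * (u1*u2)^alpha = p1 * p2, preserving independence.
    return lo ** (1.0 - alpha) * hi ** alpha

alpha = 0.3
p1 = select_probability(0.2, 0.4, alpha)
p2 = select_probability(0.5, 0.9, alpha)
p12 = select_probability(0.2 * 0.5, 0.4 * 0.9, alpha)  # product interval
```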
Originality/value
This study provides a theoretical explanation of an empirically successful technique for decision-making under interval uncertainty about probabilities. This explanation is based on the natural idea that the method for selecting probabilities from the corresponding intervals should preserve independence.