Search results
1 – 10 of over 51,000

Yingbao He, Jianhui Liu, Feilong Hua, He Zhao and Jie Wang
Abstract
Purpose
Under multiaxial random loading, the material stress–strain response is not periodic, which makes it difficult to determine the direction of the critical plane on the material. Meanwhile, existing methods developed for constant-amplitude loading cannot be applied directly to multiaxial random loading; this problem can be solved by using an equivalent stress transformation method.
Design/methodology/approach
First, the Liu-Mahadevan critical plane is introduced into multiaxial random fatigue, which makes it possible to determine the position of the material's critical plane under random loading. Then, an equivalent stress transformation method is proposed that can convert random loads into constant loads. Meanwhile, the ratio of the mean stress to the yield strength is defined as a new mean stress influence factor, and a new non-proportional additional strengthening factor is proposed by considering the effect of phase differences.
Findings
The proposed model is validated using multiaxial random fatigue test data of TC4 titanium alloy specimens, and its results are compared with those based on Miner's rule and the BSW model, showing that the proposed method is more accurate.
Originality/value
In this work, a new multiaxial random fatigue life prediction model is proposed based on the equivalent stress transformation method, which considers the mean stress effect and the additional strengthening effect. Results show that the fatigue lives predicted by the proposed model are in good agreement with the test data.
Hui Chen and Donghai Liu
Abstract
Purpose
The purpose of this study is to develop a stochastic finite element method (FEM) to solve the calculation precision deficiency caused by spatial variability of dam compaction quality.
Design/methodology/approach
The Cholesky decomposition method was applied to generate the constraint random field of porosity. Large-scale laboratory triaxial tests were conducted to determine the quantitative relationship between the dam compaction quality and the Duncan–Chang constitutive model parameters. Based on this relationship, the constraint random fields of the mechanical parameters were generated, and the stochastic FEM could then be conducted.
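As a rough illustration of the Cholesky-based field generation described above, the sketch below draws a spatially correlated porosity field on a 1-D grid. The grid, the exponential covariance and all parameter values are assumptions for illustration, not taken from the paper, and the conditioning on test-pit data is omitted:

```python
import numpy as np

# 1-D grid along the dam axis (illustrative; the actual analysis is in 2-D/3-D)
n = 50
x = np.linspace(0.0, 100.0, n)

# Exponential autocovariance of porosity: sigma^2 * exp(-|xi - xj| / theta)
sigma, theta = 0.02, 20.0   # assumed std of porosity and correlation length
cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / theta)

# Cholesky factor L with L @ L.T equal to the covariance matrix
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for stability

# Correlated porosity field: mean porosity + L @ (iid standard normals)
rng = np.random.default_rng(0)
porosity = 0.22 + L @ rng.standard_normal(n)
print(porosity.shape)
```

Multiplying iid standard normals by the Cholesky factor imposes exactly the target covariance, which is why the method is a standard way to simulate spatially variable soil properties.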
Findings
When the fully random field was simulated without the restriction effect of experimental data on test pits, the spatial variabilities of both displacement and stress results were all overestimated; however, when the stochastic FEM was performed disregarding the correlation between mechanical parameters, the variabilities of vertical displacement and stress results were underestimated and variation pattern for horizontal displacement also changed. In addition, the method could produce results that are closer to the actual situation.
Practical implications
Although only a concrete-faced rockfill dam was tested in the numerical examples, the proposed method is applicable to arbitrary types of rockfill dams.
Originality/value
The value of this study is that the proposed method allowed for the spatial variability of constitutive model parameters and that the applicability was confirmed by the actual project.
Cheng Liu, Yi Shi, Wenjing Xie and Xinzhong Bao
Abstract
Purpose
This paper aims to provide a complete analysis framework and prediction method for the construction of the patent securitization (PS) basic asset pool.
Design/methodology/approach
This paper proposes an integrated classification method based on the genetic algorithm and the random forest algorithm. First, the patent value evaluation model and the SME credit evaluation model are comprehensively considered, and 17 indicators measuring patent value and SME credit are determined. Second, the classification label for high-quality basic assets is established. Then, the genetic algorithm and random forest model are used to predict and screen high-quality basic assets. Finally, the performance of the model is evaluated.
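A minimal sketch of the genetic-algorithm part of such a pipeline, evolving binary masks over 17 indicators on synthetic data. The data, the fitness function and the GA settings are all illustrative assumptions; the paper pairs the GA with a random forest classifier rather than the naive additive score used here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for the 17 patent-value/SME-credit indicators
X = rng.standard_normal((200, 17))
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)   # synthetic quality label

def fitness(mask):
    # Score a feature subset by |correlation| of a naive additive score with y
    if not mask.any():
        return 0.0
    return abs(np.corrcoef(X[:, mask].sum(axis=1), y)[0, 1])

# Minimal generational GA over binary feature masks
pop = rng.random((30, 17)) < 0.5
for _ in range(40):
    fits = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fits)[::-1][:10]]            # truncation selection
    children = []
    while len(children) < 20:
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = int(rng.integers(1, 17))
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        children.append(child ^ (rng.random(17) < 0.05))  # bit-flip mutation
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print(best.sum())  # number of selected indicators
```

In a full implementation, the fitness of a mask would be the cross-validated accuracy of a random forest trained on the selected indicators.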
Findings
The machine learning model proposed in this study is mainly used to solve the screening problem of high-quality patents that constitute the underlying asset pool of PS. The empirical research shows that the integrated classification method based on genetic algorithm and random forest has good performance and prediction accuracy, and is superior to the single method that constitutes it.
Originality/value
The main contributions of the article are twofold: first, the machine learning model proposed in this article determines the standards for high-quality basic assets; second, this article addresses the screening problem of basic assets in PS.
Nguyen Thi Dinh, Nguyen Thi Uyen Nhi, Thanh Manh Le and Thanh The Van
Abstract
Purpose
The problem of image retrieval and image description exists in various fields. In this paper, a model of content-based image retrieval and image content extraction based on the KD-Tree structure was proposed.
Design/methodology/approach
A Random Forest structure was built on the basis of the balanced multibranch KD-Tree structure to classify the objects in each image. For that purpose, a KD-Tree structure was generated by the Random Forest to retrieve a set of similar images for an input image. A KD-Tree structure is applied to determine a relationship word at the leaves, so as to extract the relationships between objects in an input image. The content of an input image is then described based on class names and the relationships between objects.
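A minimal sketch of a classic two-branch KD-Tree with nearest-neighbour search, assuming 2-D points; the paper's balanced multibranch variant and its Random-Forest coupling are considerably more involved:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_kdtree(points, depth=0):
    # Recursively split on alternating axes at the median, giving a balanced
    # binary KD-Tree (the paper's structure is multibranch; this is the
    # classic two-branch sketch)
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    # Standard branch-and-bound nearest-neighbour descent
    if node is None:
        return best
    if best is None or dist(target, node["point"]) < dist(target, best):
        best = node["point"]
    axis = depth % len(target)
    if target[axis] < node["point"][axis]:
        near, far = node["left"], node["right"]
    else:
        near, far = node["right"], node["left"]
    best = nearest(near, target, depth + 1, best)
    # Only search the far side if the splitting plane is closer than the best hit
    if abs(target[axis] - node["point"][axis]) < dist(target, best):
        best = nearest(far, target, depth + 1, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
print(nearest(tree, (9, 2)))  # (8, 1)
```

For image retrieval, each point would be a feature vector and the leaves would additionally store class names and relationship words.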
Findings
A model of image retrieval and image content extraction was proposed based on the proposed theoretical basis; simultaneously, the experiment was built on multi-object image datasets including Microsoft COCO and Flickr with an average image retrieval precision of 0.9028 and 0.9163, respectively. The experimental results were compared with those of other works on the same image dataset to demonstrate the effectiveness of the proposed method.
Originality/value
A balanced multibranch KD-Tree structure was built to apply to relationship classification on the basis of the original KD-Tree structure. Then, KD-Tree Random Forest was built to improve the classifier performance and retrieve a set of similar images for an input image. Concurrently, the image content was described in the process of combining class names and relationships between objects.
Craig Ellis and Patrick Wilson
Abstract
Purpose
To develop an integrated approach to forecasting spot foreign exchange rates by incorporating some principles underlying long‐term dependence.
Design/methodology/approach
The paper utilises the random‐walk framework to develop a stochastic forecast model wherein the sign (positive or negative) and magnitude (strong or weak) of dependence can be separately controlled. The integrated model demonstrates superior forecast performance over a conventional random walk.
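The idea of beating a pure random walk by exploiting known dependence in returns can be sketched as follows, using simple AR(1) persistence as an assumed stand-in for the paper's long-term dependence model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate log returns with positive first-order dependence, a crude stand-in
# for the long-term dependence structure the paper models
phi, n = 0.5, 1000
eps = 0.01 * rng.standard_normal(n)
r = np.zeros(n)
for t in range(1, n):
    r[t] = phi * r[t - 1] + eps[t]
logp = np.cumsum(r)   # log price series

# One-step-ahead forecasts over the last 200 observations
idx = np.arange(n - 201, n - 1)
rw_forecast = logp[idx]                  # pure random walk: tomorrow = today
dep_forecast = logp[idx] + phi * r[idx]  # exploit the dependence in returns

rw_mse = np.mean((logp[idx + 1] - rw_forecast) ** 2)
dep_mse = np.mean((logp[idx + 1] - dep_forecast) ** 2)
print(dep_mse < rw_mse)
```

When the sign and magnitude of dependence are known a priori, the dependence-aware forecast removes the predictable component of the next return, which is the intuition behind the paper's out-of-sample result.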
Findings
Using spot log prices and log price changes (returns) for the USD/AUD exchange rate, the initial outcomes of the study suggest that a priori knowledge of the underlying sign and magnitude of long‐term dependence yields out‐of‐sample forecasts superior to those of a random walk model.
Research limitations/implications
An independent assessment of the contribution to forecast accuracy of controlling only for the sign of dependence between successive price changes shows little additional improvement in out-of-sample forecast performance over the random walk null.
Practical implications
The findings of the study have important ramifications for managerial finance as they provide important insights on expected future currency returns with potential advantages in currency hedging and/or timing of international capital flows.
Originality/value
The contribution of this paper is to develop an original forecast model explicitly incorporating the conceptual and theoretical characteristics of long‐term dependent time series. By separating the key characteristics and modelling each individually, the contribution of each to forecast accuracy can be evaluated.
Joseph F. Hair Jr. and Luiz Paulo Fávero
Abstract
Purpose
This paper aims to discuss multilevel modeling for longitudinal data, clarifying the circumstances in which they can be used.
Design/methodology/approach
The authors estimate three-level models with repeated measures, offering conditions for their correct interpretation.
Findings
From the concepts and techniques presented, the authors can propose models, in which it is possible to identify the fixed and random effects on the dependent variable, understand the variance decomposition of multilevel random effects, test alternative covariance structures to account for heteroskedasticity and calculate and interpret the intraclass correlations of each analysis level.
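The variance decomposition and intraclass correlation mentioned above can be illustrated with a moment-based (one-way ANOVA) estimator on simulated two-level data; real multilevel analyses would instead use ML/REML estimation, and all values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-level toy data: 30 groups with 20 repeated measures each
g, m = 30, 20
u = rng.normal(0.0, 2.0, size=g)        # random intercepts, true variance 4
e = rng.normal(0.0, 1.0, size=(g, m))   # level-1 residuals, true variance 1
y = 10.0 + u[:, None] + e               # intercept-only two-level model

# One-way ANOVA variance decomposition
group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (g * (m - 1))  # within MS
msb = m * group_means.var(ddof=1)                              # between MS
var_e = msw
var_u = max((msb - msw) / m, 0.0)

# Intraclass correlation: share of total variance at the group level
icc = var_u / (var_u + var_e)   # true value here is 4 / (4 + 1) = 0.8
print(round(icc, 2))
```

A high ICC, as in this simulation, signals that observations within a group are strongly dependent, which is precisely the situation in which a multilevel model is required rather than pooled OLS.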
Originality/value
Understanding how nested data structures and data with repeated measures work enables researchers and managers to define several types of constructs from which multilevel models can be used.
Peter Kelle and Pam Anders Miller
Abstract
The transition from a traditional purchasing system to a JIT purchasing system can be a slow process or even unattainable, because of unreliable suppliers. The purchaser tries to co‐operate with the vendor, with the goal of receiving smaller, more frequent deliveries, on time, with the quality and quantity required. Often the vendor is ready to co‐operate, but is unable to fulfil these requirements. Provides simple models and methods to aid purchasers in this transition state. Gives simple approximate formulas for the minimum safety stock necessary to ensure the required service level of supply. Considers the case of random delays in shipments, random yield and uncertain demand, which are typical characteristics during the transition period. This safety stock depends on the order quantity and the number of shipments. Provides a simple method to find the order quantity, the number of shipments and safety stock, which minimize the joint total cost of the vendor and purchaser and ensure the required level of supply. Analyzes the savings provided by this method and the sensitivity of the models, in detail.
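A common textbook approximation of the safety stock needed under both demand and delivery-delay uncertainty is sketched below; the formula and the input values are illustrative assumptions, not the paper's own expressions, which additionally account for random yield and the number of shipments:

```python
import math
from statistics import NormalDist

# Illustrative inputs (not from the paper)
d, sigma_d = 100.0, 20.0    # mean and std of daily demand
L, sigma_L = 5.0, 1.5       # mean and std of delivery delay, in days
service = 0.95              # required service level of supply

# Safety factor from the standard normal quantile (about 1.645 at 95%)
z = NormalDist().inv_cdf(service)

# Variance of lead-time demand combines demand and delay uncertainty
safety_stock = z * math.sqrt(L * sigma_d**2 + d**2 * sigma_L**2)
print(round(safety_stock, 1))  # ≈ 257.5
```

In this approximation the delay-variance term dominates, which illustrates why unreliable suppliers force large safety stocks and why shrinking delivery variability is central to a JIT transition.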
Xue Deng, Xiaolei He and Cuirong Huang
Abstract
Purpose
This paper proposes a fuzzy random multi-objective portfolio model with different entropy measures and designs a hybrid algorithm to solve the proposed model.
Design/methodology/approach
Because random uncertainty and fuzzy uncertainty are often combined in a real-world setting, the security returns are considered as fuzzy random numbers. In the model, the authors also consider the effects of different entropy measures, including Yager's entropy, Shannon's entropy and min-max entropy. During the process of solving the model, the authors use a ranking method to convert the expected return into a crisp number. To find the optimal solution efficiently, a fuzzy programming technique based on artificial bee colony (ABC) algorithm is also proposed.
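A minimal sketch of the artificial bee colony metaheuristic, here minimising a stand-in sphere objective rather than the paper's fuzzy random portfolio criterion; the colony size, abandonment limit and iteration count are assumed values:

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Stand-in objective (sphere); the paper minimises a portfolio criterion
    return float(np.sum(x ** 2))

dim, n_food, limit, iters = 5, 20, 10, 200
lo, hi = -5.0, 5.0
foods = rng.uniform(lo, hi, (n_food, dim))
fit = np.array([objective(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

def try_neighbour(i):
    # Perturb one random dimension toward/away from a random partner source
    k, j = rng.integers(n_food), rng.integers(dim)
    cand = foods[i].copy()
    cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
    cand = np.clip(cand, lo, hi)
    f = objective(cand)
    if f < fit[i]:                       # greedy replacement
        foods[i], fit[i], trials[i] = cand, f, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_food):              # employed bee phase
        try_neighbour(i)
    probs = 1.0 / (1.0 + fit)            # onlookers prefer better sources
    probs /= probs.sum()
    for _ in range(n_food):              # onlooker bee phase
        try_neighbour(rng.choice(n_food, p=probs))
    stale = int(np.argmax(trials))       # scout bee phase
    if trials[stale] > limit:
        foods[stale] = rng.uniform(lo, hi, dim)
        fit[stale] = objective(foods[stale])
        trials[stale] = 0

best = foods[np.argmin(fit)]
print(round(objective(best), 4))
```

In the hybrid algorithm of the paper, each food source would encode a candidate portfolio and the fuzzy programming step would supply the fitness evaluation.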
Findings
(1) The return of the optimal portfolio increases as the level of investor risk aversion increases. (2) The differences among the investment weights of the optimal portfolio obtained with Yager's entropy are much smaller than those obtained with the min–max entropy. (3) The performance of the ABC algorithm in solving the proposed model is superior to that of other intelligent algorithms such as the genetic algorithm, differential evolution and particle swarm optimization.
Originality/value
To the best of the authors' knowledge, no effort has been made to consider a fuzzy random portfolio model with different entropy measures. Thus, the novelty of the research lies in constructing a fuzzy random multi-objective portfolio model with different entropy measures and designing a hybrid fuzzy programming-ABC algorithm to solve the proposed model.
Kiyoshi Kobayashi and Kiyoyuki Kaito
Abstract
Purpose
This study aims to focus on asset management of large-scale information systems supporting infrastructures and especially seeks to address a methodology for their statistical deterioration prediction based on historical inspection data. Information systems are composed of many devices. The deterioration process, i.e. the wear-out failure generation process of those devices, is formulated by a Weibull hazard model. Furthermore, in order to account for the heterogeneity of the hazard rate of each device, a random proportional Weibull hazard model, which expresses the heterogeneity of the hazard rate as random variables, is proposed.
Design/methodology/approach
Large‐scale information systems comprise many components, and different types of components might have different hazard rates. Therefore, when analyzing faults of information systems that comprise various types of devices and components, it is important to consider the heterogeneity of the hazard rates that exist between the different types of components. In this study, with this in consideration, the random proportional Weibull hazard model, whose heterogeneity of hazard rates is subject to a gamma distribution, is formulated and a methodology is proposed which estimates the failure rate of various components comprising an information system.
Findings
Through a case study using a traffic control system for expressways, the validity of the proposed model is empirically verified. Concretely, for the HDDs, the service life at which the survival probability is 50 percent is estimated as 158 months. However, even for the same HDD, the use environment differs according to usage. In fact, among the three usages (PC, server, others), failures happen earliest in the case of PCs, which have the highest heterogeneity parameter and a survival probability of 50 percent after 135 months of usage; for the other usages, the survival probability reaches 50 percent at 303 months.
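The 50-percent-survival service life quoted above is, for a plain Weibull model, simply the distribution's median; the sketch below computes it from hypothetical shape and scale parameters (not the paper's estimates, which additionally involve the gamma-distributed heterogeneity):

```python
import math

def weibull_median_life(shape, scale):
    # Time t at which Weibull survival S(t) = exp(-(t/scale)**shape) equals 0.5
    return scale * math.log(2.0) ** (1.0 / shape)

# Hypothetical parameters, in months (not the paper's estimates)
shape, scale = 1.5, 200.0
t50 = weibull_median_life(shape, scale)
print(round(t50, 1))  # ≈ 156.6
```

A shape parameter above 1 corresponds to wear-out behaviour (an increasing hazard rate), which matches the failure mode the paper models.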
Originality/value
To operationally express the heterogeneity of failure rates, the Weibull hazard model is employed as a base, and a random proportional Weibull hazard model expressing the proportional heterogeneity of hazard rates with a standard gamma distribution is formulated. By estimating the parameter of the standard proportional Weibull hazard function and the parameter of the probability distribution that expresses the heterogeneity of the proportionality constant between the types, the random proportional Weibull hazard model can easily express the heterogeneity of the hazard rates between types and components.
Maneerat Kanrak, Hong Oanh Nguyen and Yuquan Du
Abstract
This paper presents a critical review of economic network analysis methods and their applications to maritime transport. A network can be described in terms of its structure, topology, characteristics and connectivity with measures such as density, degree distribution, centrality (degree, betweenness, closeness, eigenvector and strength), clustering coefficient, average shortest path length and assortativity. Various models such as the random graph model, the block model and the ERGM can be used to analyse and explore the formation of a network and the interaction between nodes. The review of the existing theories and models has found that, while these models are computationally intensive, they rest on rather restrictive assumptions about network formation and the relationships between ports at the local and global levels that require further investigation. Based on the review, a conceptual framework for maritime transport network research is developed, and applications for future research are also discussed.
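Several of the connectivity measures listed above are straightforward to compute from an adjacency structure; the sketch below evaluates density, degree and the local clustering coefficient on a small made-up "port" graph (the edge list is purely illustrative):

```python
# Small undirected "port network" as an edge list (illustrative, not real data)
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
nodes = sorted({v for e in edges for v in e})
adj = {v: set() for v in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))        # realised share of possible links
degree = {v: len(adj[v]) for v in nodes}        # number of direct connections

def clustering(v):
    # Fraction of a node's neighbour pairs that are themselves connected
    nb = list(adj[v])
    k = len(nb)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nb[j] in adj[nb[i]])
    return 2 * links / (k * (k - 1))

print(density, degree["C"], round(clustering("C"), 3))
```

Models such as the random graph model or the ERGM then ask whether the observed values of these measures could plausibly arise from a given network-formation process.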