Search results
1 – 10 of 78
Zhijie Wen, Qikun Zhao and Lining Tong
The purpose of this paper is to present a novel method for detecting minor fabric defects.
Abstract
Purpose
The purpose of this paper is to present a novel method for detecting minor fabric defects.
Design/methodology/approach
This paper proposes a PETM-CNN algorithm, designed around a self-similarity estimation algorithm and a convolutional neural network (CNN). The PE (Patches Extractor) algorithm preprocesses the fabric image by extracting patches that are likely to be defective. A TM-CNN (Triplet Metric CNN) method then predicts labels for the patches and the final label of the image. The TM-CNN outperforms a standard CNN.
Findings
This algorithm is superior to other algorithms on a data set of fabric images with minor defects. The proposed method accurately classifies fabric images as defective or defect-free, even when the defects are minor. The experimental results show that the approach is effective.
Originality/value
Traditional fabric defect detection methods are not effective at detecting minor defects, so this paper develops a method for classifying fabric images with minor defects based on self-similarity estimation and a CNN. This paper offers the first investigation of minor fabric defects.
Details
Keywords
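The TM-CNN described above relies on triplet metric learning. As a minimal illustrative sketch (not the paper's implementation; the function names and the default margin are assumptions), the standard triplet loss that such metric CNNs minimize can be written as:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: push the anchor-positive distance
    at least `margin` below the anchor-negative distance."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

In training, the anchor and positive would come from the same class (e.g. defect-free patches) and the negative from the other class, so embeddings of like patches are pulled together and unlike patches pushed apart.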
Ngai Hang Chan and Wilfredo Palma
Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), the estimation of long-memory time series models has received considerable attention and a number of…
Abstract
Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), the estimation of long-memory time series models has received considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000), among others. Given the diversified applications of these techniques in different areas, this review aims at providing a succinct survey of these methodologies as well as an overview of important related problems such as ML estimation with missing data (Palma & Chan, 1997), the influence of subsets of observations on estimates, and the estimation of seasonal long-memory models (Palma & Chan, 2005). The performance and asymptotic properties of these techniques are compared and examined, and the interconnections and finite-sample performance of these procedures are studied. Finally, applications of these methodologies to financial time series are discussed.
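The hallmark of the long-memory models surveyed above is the slow hyperbolic decay of autocorrelations. As a hedged sketch (not taken from the survey itself; the function name is illustrative), the exact autocorrelation function of an ARFIMA(0, d, 0) process can be computed by a simple recursion derived from its gamma-function form:

```python
def arfima_acf(d, nlags):
    """Autocorrelations of a fractional-noise ARFIMA(0, d, 0) process.
    Uses the recursion rho(k) = rho(k-1) * (k - 1 + d) / (k - d),
    which follows from rho(k) = Gamma(1-d)Gamma(k+d) / (Gamma(d)Gamma(k+1-d))."""
    rho = [1.0]
    for k in range(1, nlags + 1):
        rho.append(rho[-1] * (k - 1 + d) / (k - d))
    return rho
```

For d = 0.3 this gives rho(1) = d/(1 − d) ≈ 0.43, and the sequence decays roughly as k^(2d−1) rather than geometrically, which is exactly what distinguishes long memory from short-memory ARMA behaviour.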
Josephine Dufitinema and Seppo Pynnönen
The purpose of this paper is to examine the evidence of long-range dependence behaviour in both house price returns and volatility for fifteen main regions in Finland over the…
Abstract
Purpose
The purpose of this paper is to examine the evidence of long-range dependence behaviour in both house price returns and volatility for fifteen main regions in Finland over the period 1988:Q1 to 2018:Q4. These regions are divided geographically into 45 cities and sub-areas according to their postcode numbers. The studied dwelling type is apartments (blocks of flats), divided into one-room, two-room, and three-or-more-room categories.
Design/methodology/approach
For each house price return series, both parametric and semiparametric long memory approaches are used to estimate the fractional differencing parameter d in an autoregressive fractionally integrated moving average [ARFIMA (p, d, q)] process. Moreover, for cities and sub-areas with significant clustering effects (autoregressive conditional heteroscedasticity [ARCH] effects), the semiparametric long memory method is used to analyse the degree of persistence in volatility by estimating the fractional differencing parameter d in both squared and absolute price returns.
Findings
A higher degree of predictability was found in the price returns of all three apartment types, with the estimates of the long memory parameter constrained to the stationary and invertible interval, implying that the returns of the studied dwelling types are long-term dependent. This high level of persistence in the house price indices differs from other assets, such as stocks and commodities. Furthermore, evidence of long-range dependence was discovered in house price volatility, with more than half of the studied samples exhibiting long memory behaviour.
Research limitations/implications
Investigating the long memory behaviour in both returns and volatility of the house prices is crucial for investment, risk and portfolio management. One reason is that the evidence of long-range dependence in the housing market returns suggests a high degree of predictability of the asset. The other reason is that the presence of long memory in the housing market volatility aids in the development of appropriate time series volatility forecasting models in this market. The study outcomes will be used in modelling and forecasting the volatility dynamics of the studied types of dwellings. The quality of the data limits the analysis and the results of the study.
Originality/value
To the best of the authors’ knowledge, this is the first research that assesses the long memory behaviour of the Finnish housing market. It is also the first study that evaluates the volatility of the Finnish housing market using data at both the municipal and geographical levels.
Details
Keywords
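A standard semiparametric estimator of the fractional differencing parameter d of the kind the methodology above refers to is the Geweke–Porter-Hudak (GPH) log-periodogram regression. A minimal sketch, assuming the periodogram has already been computed at the first Fourier frequencies (the function name is illustrative, and the paper may use a different semiparametric variant):

```python
import math

def gph_estimate(periodogram, freqs):
    """Log-periodogram (GPH) regression: regress log I(lambda_j) on
    log(4 sin^2(lambda_j / 2)) over low frequencies; the slope estimates -d."""
    x = [math.log(4.0 * math.sin(f / 2.0) ** 2) for f in freqs]
    y = [math.log(p) for p in periodogram]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return -sxy / sxx  # estimate of d
```

The choice of how many low frequencies to include is the usual bias-variance trade-off of this estimator; on a periodogram that exactly follows the long-memory spectral shape, the regression recovers d exactly.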
Carlos Barros, Luis A. Gil-Alana and Peter Wanke
This paper aims to investigate the production of sugar cane ethanol in Brazil for the time period 1983-2016, separating the data by geographical location.
Abstract
Purpose
This paper aims to investigate the production of sugar cane ethanol in Brazil for the time period 1983-2016, separating the data by geographical location.
Design/methodology/approach
For this purpose, the authors use techniques based on the concept of fractional integration.
Findings
The authors show that the data corresponding to total production are highly persistent, with an integration order smaller than but close to 1. In fact, the unit root hypothesis cannot be rejected, implying that shocks have a permanent nature and thus that policy measures are required to recover the original level after exogenous shocks. Separating the data into two sub-regions, namely North–Northeast and Central–South, higher levels of persistence are detected in the latter, while the former presents some evidence of mean-reverting behaviour, implying that shocks there will disappear by themselves in the long run. These results are consistent across all the methods used.
Originality/value
The originality lies in the time series techniques used in the paper, which depart from the classical methods based on unit roots and integer degrees of differentiation.
Details
Keywords
SERGIO M. FOCARDI and FRANK J. FABOZZI
Fat‐tailed distributions have been found in many financial and economic variables ranging from forecasting returns on financial assets to modeling recovery distributions in…
Abstract
Fat‐tailed distributions have been found in many financial and economic variables ranging from forecasting returns on financial assets to modeling recovery distributions in bankruptcies. They have also been found in numerous insurance applications such as catastrophic insurance claims and in value‐at‐risk measures employed by risk managers. Financial applications include:
The purpose of this paper was to study laminar fluid flow and convective heat transfer in a conical gap at small conicity angles up to 4° for the case of disk rotation with a…
Abstract
Purpose
The purpose of this paper was to study laminar fluid flow and convective heat transfer in a conical gap at small conicity angles up to 4° for the case of disk rotation with a fixed cone.
Design/methodology/approach
In this paper, the improved asymptotic expansion method developed by the author was applied to the self-similar Navier–Stokes equations. The characteristic Reynolds number ranged from 0.001 to 2.0, and the Prandtl numbers ranged from 0.71 to 10.
Findings
Compared to previous approaches, the improved asymptotic expansion method achieves accuracy comparable to the self-similar solution over a significantly wider range of Reynolds and Prandtl numbers. Including radial thermal conductivity in the energy equation at small conicity angles leads to insignificant deviations in the Nusselt number (at most 1.23%).
Practical implications
This problem has applications in rheometry, where such flows are used to experimentally determine the viscosity of liquids, as well as in bioengineering and medicine, where cone-and-disk devices serve as incubators for nurturing endothelial cells.
Social implications
The study can help design more effective devices to nurture endothelial cells, which regulate exchanges between the bloodstream and the surrounding tissues.
Originality/value
To the best of the authors’ knowledge, for the first time, novel approximate analytical solutions were obtained for the radial, tangential and axial velocity components, flow swirl angle on the disk, tangential stresses on both surfaces, as well as static pressure, which varies not only with the Reynolds number but also across the gap. These solutions are in excellent agreement with the self-similar solution.
Details
Keywords
The purpose of this paper is to introduce the performance of traditional collision resolution algorithm (CRA) under self‐similar traffic and present a prediction‐based CRA for…
Abstract
Purpose
The purpose of this paper is to examine the performance of the traditional collision resolution algorithm (CRA) under self-similar traffic and to present a prediction-based CRA for wireless media access. The method is then evaluated experimentally.
Design/methodology/approach
Traditional traffic models are mostly based on the Poisson model or the Bernoulli process, but traffic measurements over the past decade have found the coexistence of both long- and short-range dependence in network traffic. On the other hand, the CRA is an effective strategy for improving the performance of multiple access protocols, and it achieves the highest capacity among all known multiple access protocols under the Poisson traffic model. In this paper, a CRA model is built in OPNET to study the effects of different traffic traces, such as the fractional autoregressive integrated moving average process with a non-Gaussian white driving sequence and real traffic data captured at a well-attended ACM conference. The performance of the traditional CRA is compared under the self-similar and Poisson traffic models, and a novel CRA based on time-series prediction theory is designed for self-similar traffic.
Findings
The traditional Poisson traffic model achieves the best performance under the traditional CRA, while the performance of self-similar traffic under the traditional CRA is too poor for an actual network environment. For example, the Poisson traffic model yields the highest throughput, the smallest delay and the fewest collision resolutions under the traditional CRA. This paper demonstrates that the first-come first-served (FCFS) algorithm must be improved under self-similar traffic models. The novel prediction-based collision resolution strategy provides better performance under self-similar traffic: its throughput, delay and number of collision resolutions are all better than those of the traditional CRA.
Originality/value
This paper presents a prediction-based CRA for self-similar traffic, combining FCFS with prediction theory, and provides a new method for resolving packet collisions.
Details
Keywords
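The classic CRA the paper takes as its baseline is a tree-splitting scheme: after a collision, the colliding packets randomly split into subgroups that retransmit in turn. A minimal sketch under the assumption of distinct packet ids (the deterministic default `coin`, reading one bit of the packet id per level, stands in for the random coin flips; names are illustrative, not the paper's implementation):

```python
def tree_resolve(packets, coin=None, depth=0):
    """Binary tree-splitting collision resolution (a standard CRA).
    Returns the number of channel slots needed to resolve all packets.
    `coin(pkt, depth)` decides which subgroup a packet joins after a
    collision; the default uses bit `depth` of the packet id, so it
    terminates whenever packet ids are distinct."""
    if coin is None:
        coin = lambda pkt, d: (pkt >> d) & 1
    if len(packets) <= 1:
        return 1  # one slot: a success or an idle slot
    # One collision slot, then each subgroup is resolved recursively.
    left = [p for p in packets if coin(p, depth) == 0]
    right = [p for p in packets if coin(p, depth) == 1]
    return 1 + tree_resolve(left, coin, depth + 1) + tree_resolve(right, coin, depth + 1)
```

Counting slots this way makes the throughput comparison in the abstract concrete: bursty, self-similar arrivals produce large collision sets, and the slot count of plain splitting grows accordingly, which is what a prediction-based variant tries to avoid.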
The Hurst exponent has been very important in telling the difference between fractal signals and explaining their significance. For estimators of the Hurst exponent, accuracy and…
Abstract
Purpose
The Hurst exponent is important for distinguishing fractal signals and explaining their significance. For estimators of the Hurst exponent, accuracy and efficiency are two unavoidable considerations. The main purpose of this study is to raise the execution efficiency of existing estimators, especially the fast maximum likelihood estimator (MLE), which has optimal accuracy.
Design/methodology/approach
A two-stage procedure is developed that combines a quicker method, which narrows the estimate from a large range to a small one, with a more accurate method that refines it. For the best possible accuracy, the data-induction method is currently the ideal first-stage estimator and the fast MLE is the best candidate for the second stage.
Findings
For signals modeled as discrete-time fractional Gaussian noise, the proposed two-stage estimator can save up to 41.18 per cent of the computational time of the fast MLE while remaining almost as accurate; for signals modeled as discrete-time fractional Brownian motion, it can save about 35.29 per cent except at smaller data sizes.
Originality/value
The proposed two-stage estimation procedure is a novel idea. Other fields of parameter estimation can be expected to apply the concept of a two-stage procedure to raise computational performance while remaining almost as accurate as the more accurate of the two estimators.
Details
Keywords
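A common "quicker method" of the kind a two-stage procedure might start from is the aggregated-variance estimator, which exploits the scaling var(X̄_m) ∝ m^(2H−2) of block means of a self-similar signal. A hedged sketch (this is not the paper's data-induction method or fast MLE; function names are illustrative):

```python
import math
from statistics import pvariance

def aggregated_variances(x, block_sizes):
    """Variance of non-overlapping block means at each aggregation level m."""
    out = []
    for m in block_sizes:
        means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        out.append(pvariance(means))
    return out

def hurst_from_variances(block_sizes, variances):
    """Fit log var(m) = (2H - 2) log m + c by least squares; return H."""
    lx = [math.log(m) for m in block_sizes]
    ly = [math.log(v) for v in variances]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    return 1.0 + slope / 2.0
```

In a two-stage design, a cheap estimator like this would bracket H coarsely, after which a likelihood-based refinement searches only the narrowed interval, which is where the reported time savings come from.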
Guglielmo Maria Caporale, Luis Alberiko Gil-Alana and Alex Plastun
The purpose of this paper is to provide some new empirical evidence on the weekend effect (one of the best known anomalies in financial markets) in Ukrainian futures prices. The…
Abstract
Purpose
The purpose of this paper is to provide some new empirical evidence on the weekend effect (one of the best known anomalies in financial markets) in Ukrainian futures prices.
Design/methodology/approach
The analysis uses various statistical techniques (average analysis, Student’s t-test, dummy variables, and fractional integration) to test for the presence of this anomaly, and then a trading simulation approach to establish whether it can be exploited to make extra profits.
Findings
The statistical evidence points to abnormal positive returns on Fridays, and a trading strategy based on this anomaly is shown to generate annual profits of up to 25 per cent. The implication is that the Ukrainian stock market is inefficient.
Originality/value
This paper provides new empirical evidence on the weekend effect in Ukrainian futures prices, combining statistical tests with a trading simulation to show that the anomaly can be exploited for extra profits, which implies that the Ukrainian stock market is inefficient.
Details
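The Student's t-test step of a weekend-effect analysis like the one above amounts to comparing mean Friday returns against the other weekdays. A minimal sketch using Welch's unequal-variance form (an assumption for illustration; the paper does not specify which t-test variant it uses):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for the difference in mean returns of two
    samples, e.g. Friday returns vs. returns on all other weekdays."""
    va, vb = variance(a), variance(b)  # sample (n-1) variances
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mean(a) - mean(b)) / se
```

A significantly positive statistic for the Friday sample would match the abstract's finding of abnormal positive Friday returns; the trading-simulation step then checks whether that statistical edge survives as realized profit.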