Search results
1 – 10 of 845
Abstract
Purpose
The purpose of this paper is to propose a novel nonlocal fractal calculus scheme dedicated to the analysis of fractal electrical circuits, namely, the generalized nonlocal fractal calculus.
Design/methodology/approach
To achieve generality, an arbitrary kernel function has been adopted. A condition on the order has been derived so that it is not tied to the γ-dimension of the fractal set. The fractal Laplace transforms of the proposed operators have also been derived.
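The operators themselves are not reproduced in this abstract; as a purely illustrative sketch in the usual fractal-calculus notation (with $S_F^{\gamma}$ the staircase function of the fractal set $F$), a kernel-generalized nonlocal fractal integral might take the form:

```latex
% Illustrative form only -- not the paper's exact definition.
% k is the arbitrary kernel; the power-law choice
% k(u, v) = (u - v)^{\beta - 1} / \Gamma(\beta) would recover the
% traditional nonlocal fractal operators.
\[
  \bigl(I_{k}f\bigr)(t)
  = \int_{a}^{t} k\bigl(S_F^{\gamma}(t),\, S_F^{\gamma}(s)\bigr)\, f(s)\, d_F^{\gamma}s .
\]
```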
Findings
Unlike the traditional power-law kernel-based nonlocal fractal calculus operators, ours are generalized, consistent with the local fractal derivative and offer a higher degree of freedom. As intended, the proposed nonlocal fractal calculus is applicable to any kind of fractal electrical circuit. It has thus been found to be a more efficient tool for fractal electrical circuit analysis than any previous fractal-set-dedicated calculus scheme.
Originality/value
A fractal calculus scheme that is more efficient for fractal electrical circuit analysis than any previous one has been proposed in this work.
Abstract
Purpose
To focus on grid generation, which is an essential part of any analytical tool for effective discretization.
Design/methodology/approach
This paper explores unstructured triangular grid generation that produces derivationally continuous, smooth and fair triangular elements, using piecewise polynomial parametric surfaces that interpolate prescribed R3 scattered data via spaces of parametric splines defined on R2 triangulations, for surfaces in the engineering sciences. The method is based upon minimizing a physics‐based natural energy expression over the parametric surface. The geometry is defined as a set of stitched triangles prior to the grid generation. As for derivational continuity between two triangular patches, C0 or C1 continuity, or both, has been imposed as required. With the addition of a penalty term, approximate C2 continuity can also be achieved. Since a physics‐based approach has been used in this work, the grid is analyzed using intersection curves with three‐dimensional planes and intrinsic geometric properties (i.e. directional derivatives) to verify derivational continuity and smoothness.
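The "natural energy expression" is not spelled out in the abstract; a typical physics-based choice for such functionals is a thin-plate-style bending energy over the parametric surface $\mathbf{F}(u,v)$ (illustrative only, not necessarily the exact functional used here):

```latex
\[
  E(\mathbf{F}) = \int_{\Omega}
    \lVert \mathbf{F}_{uu} \rVert^{2}
    + 2\,\lVert \mathbf{F}_{uv} \rVert^{2}
    + \lVert \mathbf{F}_{vv} \rVert^{2}
  \; du\, dv ,
\]
```

minimized over the spline coefficients subject to the interpolation and continuity constraints.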
Findings
The triangular grid generation that deals with derivationally continuous, smooth, and fair triangular elements has been implemented in this paper for surfaces in engineering sciences.
Practical implications
This paper deals with the important problem of grid generation, which is an essential part of any analytical tool for effective discretization. The examples demonstrating the theoretical model have been chosen from different branches of the engineering sciences. Hence, the results of this paper are of practical importance for grid generation in the engineering sciences.
Originality/value
The paper is theoretical with worked examples chosen from engineering sciences.
Eric Ghysels and J. Isaac Miller
Abstract
We analyze the sizes of standard cointegration tests applied to data subject to linear interpolation, discovering evidence of substantial size distortions induced by the interpolation. We propose modifications to these tests to effectively eliminate size distortions from such tests conducted on data interpolated from end-of-period sampled low-frequency series. Our results generally do not support linear interpolation when alternatives such as aggregation or mixed-frequency-modified tests are possible.
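As a concrete illustration of the data construction at issue (series values hypothetical), linearly interpolating an end-of-period sampled quarterly series onto a monthly grid can be done as follows:

```python
import numpy as np

# Hypothetical quarterly (low-frequency) series, sampled end-of-period.
quarterly = np.array([100.0, 103.0, 101.0, 105.0])
obs_months = np.array([3, 6, 9, 12])   # end-of-quarter month indices
months = np.arange(3, 13)              # monthly grid spanning the sample

# Linear interpolation to the higher frequency -- the construction whose
# effect on cointegration test sizes the paper studies.
monthly = np.interp(months, obs_months, quarterly)
```

The interpolated observations are exact at the end-of-quarter months and linear combinations of adjacent quarters in between, which is precisely the dependence structure that distorts standard test sizes.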
Abstract
Purpose
The purpose of this paper is to extend the h-index framework to the case that articles are counted fractionally.
Design/methodology/approach
Three restrictions related to the standard h-index are explained: as the standard h-index is a natural number, it is a rather coarse indicator; if a scientist has published a relatively small number of publications, then the h-index is completely determined by the number of publications; and the standard h-index cannot be applied if publications are counted fractionally, or when magnitude values smaller than one occur.
Findings
We recall solutions we proposed in earlier publications regarding the first two problems (the use of the interpolated h-index and of the pseudo h-index) and add a new proposal to solve the third problem. The relation between the recently introduced window/field-normalized h-type index (hwf-index) and the interpolated h-index is described. A real-world example proves the feasibility of this proposal.
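As a brief sketch of the interpolated h-index recalled here (shown for standard integer counting; the fractional-counting generalization is the paper's contribution, and the function below is illustrative, not the authors' code), the real-valued index is the point where the piecewise-linear curve through the ranked citation counts crosses the line y = x:

```python
def interpolated_h(citations):
    """Real-valued interpolated h-index: where the piecewise-linear curve
    through (rank, citations) crosses y = x."""
    c = sorted(citations, reverse=True)
    # Standard (integer) h-index: largest h with c_h >= h.
    h = sum(1 for i, ci in enumerate(c, start=1) if ci >= i)
    if h == 0:
        return 0.0
    c_h = c[h - 1]
    c_next = c[h] if h < len(c) else 0   # citations of the (h+1)-th paper
    if c_h == h:                          # curve meets y = x exactly at h
        return float(h)
    # Crossing of the segment from (h, c_h) to (h+1, c_next) with y = x.
    return ((h + 1) * c_h - h * c_next) / (1 + c_h - c_next)
```

For citation counts [5, 4, 2] the integer h-index is 2 while the interpolated value is 8/3, illustrating the finer granularity the authors exploit.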
Research limitations/implications
Colleagues have shown that the h-index and its variations have fatal flaws and hence should never be used. Yet, not everyone agrees with this opinion.
Originality/value
Assuming that the h-index still has some value, this paper introduces a refinement of the interpolated h-index, called the generalized interpolated h-index. In this way the h-index framework is extended to incorporate, for instance, the case that fractional counting for publications and citations is applied.
Sanat Agrawal, Deon J. de Beer and Yashwant Kumar Modi
Abstract
Purpose
This paper aims to convert surface data directly to a three-dimensional (3D) stereolithography (STL) part. The Geographic Information Systems (GIS) data available for a terrain describe only its surface; they carry no information for a solid model. The data therefore need to be converted into a 3D solid model before physical models can be made by additive manufacturing (AM).
Design/methodology/approach
A methodology has been developed that makes the wall and base of the part and tessellates it with triangles. A program has been written that outputs the part in STL file format. The elevation data are interpolated and any singularities present are removed. Extensive search techniques are used.
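The paper's own program is not reproduced here; a minimal sketch of the final step, emitting a triangle soup in the ASCII STL format named in the abstract, could look like this (function name and structure are illustrative):

```python
def write_ascii_stl(path, triangles):
    """Write triangles to an ASCII STL file.

    triangles: iterable of (v1, v2, v3), each vertex an (x, y, z) tuple.
    A zero facet normal is written; most STL readers recompute normals
    from the counter-clockwise vertex order.
    """
    with open(path, "w") as f:
        f.write("solid terrain\n")
        for v1, v2, v3 in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in (v1, v2, v3):
                f.write(f"      vertex {x:g} {y:g} {z:g}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid terrain\n")
```

Writing STL directly like this is what lets the methodology skip intermediate CAD file formats and the data loss they entail.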
Findings
AM technologies are increasingly being used for terrain modeling. However, little work has been done on converting surface data into a 3D solid model; the present work addresses this gap.
Practical implications
The methodology removes the data loss associated with intermediate file formats. Terrain models can be created in less time and at lower cost, and intricate terrain geometries can be created with ease and great accuracy.
Social implications
The terrain models can be used for GIS education, educating the community for catchment management, conservation management, etc.
Originality/value
The work allows direct and automated conversion of GIS surface data into a 3D STL part. It removes intermediate steps and any data loss associated with intermediate file formats.
Abstract
We develop new high‐order positive, monotone and convex interpolations, which are to be used in the multigrid context. This means that the value of the interpolant is calculated only at the midpoints lying between the locations of the given values. As a consequence, these interpolants can be calculated very efficiently. They are then tested in a time‐dependent very large scale integration process simulation application.
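A minimal sketch of the midpoint-only idea (a fourth-order midpoint stencil with a simple clamp so that monotone data stay monotone; the paper's actual constructions are more careful than this illustration):

```python
import numpy as np

def midpoint_interpolate(u):
    """Values at the midpoints between given nodes, clamped to the two
    neighbouring coarse values -- a crude monotonicity/positivity limiter,
    not the paper's exact construction."""
    u = np.asarray(u, dtype=float)
    mid = np.empty(len(u) - 1)
    # Interior midpoints: 4-point cubic stencil (-1, 9, 9, -1) / 16.
    mid[1:-1] = (-u[:-3] + 9 * u[1:-2] + 9 * u[2:-1] - u[3:]) / 16.0
    mid[0] = 0.5 * (u[0] + u[1])        # linear fallback at the ends
    mid[-1] = 0.5 * (u[-2] + u[-1])
    # Clamp each midpoint between its neighbours.
    lo = np.minimum(u[:-1], u[1:])
    hi = np.maximum(u[:-1], u[1:])
    return np.clip(mid, lo, hi)
```

Because only midpoint values are ever needed in the multigrid transfer, each value costs a single short stencil evaluation, which is why such interpolants are cheap.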
Ronald Klimberg and Samuel Ratick
Abstract
In a previous chapter (Klimberg, Ratick, & Smith, 2018), we introduced a novel approach in which cluster centroids were used as input data for the predictor variables of a multiple linear regression (MLR) used to forecast fleet maintenance costs. We applied this approach to a real data set and significantly improved the predictive accuracy of the MLR model. In this chapter, we develop a methodology for adjusting moving average forecasts of the future values of fleet service occurrences by interpolating those forecast values using their relative distances from cluster centroids. We illustrate and evaluate the efficacy of this approach with our previously used data set on fleet maintenance.
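One plausible reading of the adjustment step, sketched below with entirely hypothetical names and data (the chapter's actual interpolation rule may differ): blend per-cluster adjustment terms into the moving average forecast, weighting each by the observation's inverse distance to that cluster centroid.

```python
import numpy as np

def adjusted_forecast(ma_forecast, x, centroids, centroid_adjustments):
    """Hypothetical sketch of a distance-interpolated forecast adjustment.

    ma_forecast: scalar moving average forecast.
    x: feature vector of the observation being forecast.
    centroids: (k, d) array of cluster centroids.
    centroid_adjustments: length-k array of per-cluster adjustment terms.
    """
    d = np.linalg.norm(centroids - x, axis=1)
    w = 1.0 / np.maximum(d, 1e-12)   # inverse-distance weights
    w /= w.sum()
    return ma_forecast + float(w @ centroid_adjustments)
```

An observation sitting on a centroid inherits that cluster's full adjustment; observations between centroids receive a blended correction.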
Lee Danisch, Kevin Englehart and Andrew Trivett
Abstract
This paper describes SHAPE TAPE™, a thin array of fiber optic curvature sensors laminated on a ribbon substrate, arranged to sense bend and twist. The resulting signals are used to build a three dimensional computer model containing six degree of freedom position and orientation information for any location along the ribbon. The tape can be used to derive dynamic or static shape information from objects to which it is attached or scanned over. This is particularly useful where attachment is only partial, since shape tape “knows where it is” relative to a starting location. Measurements can be performed where cameras cannot see, without the use of magnetic fields. Applications include simulation, film animation, computer aided design, robotics, biomechanics, and crash testing.
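The shape-from-curvature idea can be sketched in its planar form: integrate the sampled curvature along arc length to get heading, then integrate heading to get position. (SHAPE TAPE itself resolves bend and twist into full six-degree-of-freedom poses; this 2D analogue is illustrative only.)

```python
import numpy as np

def integrate_curvature(kappa, ds):
    """Planar sketch: curvature samples -> heading -> 2D positions.

    kappa: curvature at each of n equally spaced segments along the tape.
    ds: arc length of each segment.
    Returns x, y arrays of n + 1 points starting at the origin, heading
    along +x -- the tape 'knows where it is' relative to its start.
    """
    theta = np.concatenate(([0.0], np.cumsum(kappa) * ds))  # heading angle
    x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
    y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
    return x, y
```

Zero curvature reproduces a straight tape; constant curvature bends it into an arc, all without any external camera or magnetic reference.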
Promio Charles F., Raja Samikkannu, Niranjan K. Sura and Shanwaz Mulla
Abstract
Purpose
Ground vibration testing (GVT) results can be used as system parameters for predicting flutter, which is essential for aeroelastic clearance. This paper aims to compute GVT-based flutter in the time domain, using unsteady air loads obtained by matrix polynomial approximations.
Design/methodology/approach
The experimental parameters, namely frequencies and mode shapes, are interpolated to build an equivalent finite element model. The unsteady aerodynamic forces extracted from MSC NASTRAN are approximated using matrix polynomial approximations. The system matrices are condensed to the required shaker location points to build an aeroelastic reduced-order state space model in SIMULINK.
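Schematically, a matrix polynomial approximation of the unsteady aerodynamic force matrix in reduced frequency $k$ takes the form below (an illustrative quasi-steady form; the paper's fit may include further terms):

```latex
\[
  \mathbf{Q}(ik) \approx \mathbf{A}_0 + \mathbf{A}_1\,(ik) + \mathbf{A}_2\,(ik)^2 ,
\]
% so that in the time domain the aerodynamic load on modal coordinates q is
\[
  \mathbf{f}_a(t) = q_\infty\!\left[ \mathbf{A}_0\,\mathbf{q}(t)
    + \mathbf{A}_1 \frac{b}{V}\,\dot{\mathbf{q}}(t)
    + \mathbf{A}_2 \frac{b^2}{V^2}\,\ddot{\mathbf{q}}(t) \right],
\]
```

with the real matrices $\mathbf{A}_i$ obtained by least-squares fits to the tabulated air loads, which is what makes a time-domain state space flutter simulation possible.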
Findings
The computed aerodynamic forces are successfully reduced to a few (optimal) input locations for flutter simulation on an unknown structural system (where stiffness and mass are not known), as shown through a case study. It is demonstrated that GVT data and the computed unsteady aerodynamic forces of a system are adequate to represent its aeroelastic behaviour.
Practical implications
The air force of every nation continuously upgrades its fleet with advanced weapon systems (stores), which demands aeroelastic flutter clearance. As original equipment manufacturers do not provide the design data (stiffness and mass) to their customers, a new methodology to build an aeroelastic system of an unknown aircraft is devised.
Originality/value
A hybrid approach is proposed, involving GVT data to build an aeroelastic state space system, using rationally approximated air loads (matrix polynomial approximations) computed on a virtual FE model for ground flutter simulation.
Sharon Slade, Paul Prinsloo and Mohammad Khalil
Abstract
Purpose
The purpose of this paper is to explore and establish the contours of trust in learning analytics and to establish steps that institutions might take to address the “trust deficit” in learning analytics.
Design/methodology/approach
“Trust” has always been part and parcel of learning analytics research and practice, but concerns around privacy, bias, the increasing reach of learning analytics, the “black box” of artificial intelligence and the commercialization of teaching and learning suggest that we should not take stakeholder trust for granted. While there have been attempts to explore and map students' and staff's perceptions of trust, there is no agreement on the contours of trust. Thirty-one experts in learning analytics research participated in a qualitative Delphi study.
Findings
This study achieved agreement on a working definition of trust in learning analytics, and on factors that impact on trusting data, trusting institutional understandings of student success and the design and implementation of learning analytics. In addition, it identifies factors that might increase levels of trust in learning analytics for students, faculty and broader stakeholder groups.
Research limitations/implications
The study is based on expert opinions; as such, the extent to which it represents a true consensus is limited.
Originality/value
Trust cannot be assumed or taken for granted. This study is original because it establishes a number of concerns around the trustworthiness of learning analytics in respect of how data and student learning journeys are understood, and how institutions can address the “trust deficit” in learning analytics.