Search results
1 – 10 of over 7,000
Tiago Oliveira, Wilber Vélez and Artur Portela
Abstract
Purpose
This paper is concerned with new formulations of local meshfree and finite element numerical methods for the solution of two-dimensional problems in linear elasticity.
Design/methodology/approach
In the local domain, assigned to each node of a discretization, the work theorem establishes an energy relationship between a statically admissible stress field and an independent kinematically admissible strain field. This relationship, derived as a weighted residual weak form, is expressed as an integral local form. Based on the independence of the stress and strain fields, this local form of the work theorem is kinematically formulated with a simple rigid-body displacement to be applied by local meshfree and finite element numerical methods. The main feature of this paper is the use of a linearly integrated local form that implements a quite simple algorithm with no further integration required.
Findings
The reduced integration performed by this linearly integrated formulation plays a key role in the behavior of local numerical methods, since it implies a reduction of the nodal stiffness which, in turn, leads to an increase in solution accuracy and, most importantly, presents no instabilities, unlike nodal integration methods without stabilization. As a consequence of this convenient linearly integrated local form, the derived meshfree and finite element numerical methods become fast and accurate, a feature of paramount importance for computational efficiency. Three benchmark problems were analyzed with these techniques in order to assess the accuracy and efficiency of the new integrated local formulations of meshfree and finite element numerical methods. The results obtained in this work are in perfect agreement with the available analytical solutions and, furthermore, outperform the computational efficiency of other methods. The accuracy and efficiency of the local numerical methods presented in this paper thus make this a very reliable and robust formulation.
Originality/value
Presentation of a new local meshfree numerical method. The method, linearly integrated along the boundary of the local domain, implements an algorithm with no further integration required. The method is absolutely reliable, with remarkably accurate results, and quite robust, with extremely fast computations.
Abstract
Most college students are required to take at least one mathematics course. Many of these students view mathematics as a dry and tedious subject, where the main task is to “plug and chug” using formulas. In contrast, mathematicians see mathematics as a creative process in which real joy comes from grappling with difficult problems and (hopefully) solving them. In this way, mathematics is like a fun puzzle. The challenge is to get students to view mathematics the same way that their teachers do. Inquiry-based learning (IBL) can help solve this problem. The Academy of Inquiry-Based Learning describes IBL as a pedagogical method that encourages students to conjecture, discover, solve, explore, collaborate, and communicate (What is IBL? (n.d.). Retrieved from http://www.inquirybasedlearning.org/?page=What_is_IBL). With IBL, teachers do not lay out all of the formulas and theorems as previous knowledge. Nor do they provide perfect, easily worked through examples and proofs for every new topic. Instead, IBL courses demonstrate the creative process that is mathematics. IBL makes class more enjoyable for both teachers and students, and can bring students closer to the real experiences of mathematicians.
Boris Mitavskiy, Jonathan Rowe and Chris Cannings
Abstract
Purpose
The purpose of this paper is to establish a version of a theorem that originated in population genetics and was later adopted in evolutionary computation theory, which will lead to novel Monte‐Carlo sampling algorithms that provably increase AI potential.
Design/methodology/approach
In the current paper the authors set up a mathematical framework and state and prove a version of a Geiringer‐like theorem that is very well suited to the development of Monte‐Carlo sampling algorithms that cope with randomness and incomplete information to make decisions.
Findings
This work establishes an important theoretical link between classical population genetics, evolutionary computation theory and model-free reinforcement learning methodology. Not only may the theory explain the success of the existing Monte‐Carlo tree sampling methodology, but it also leads to the development of novel Monte‐Carlo sampling techniques guided by a rigorous mathematical foundation.
Practical implications
The theoretical foundations established in the current work provide guidance for the design of powerful Monte‐Carlo sampling algorithms in model-free reinforcement learning, to tackle numerous problems in computational intelligence.
Originality/value
Establishing a Geiringer‐like theorem with non‐homologous recombination was a long‐standing open problem in evolutionary computation theory. Apart from overcoming this challenge in a mathematically elegant fashion and establishing a rather general and powerful version of the theorem, this work leads directly to the development of novel provably powerful algorithms for decision making in environments involving randomness and hidden or incomplete information.
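As a generic illustration of the Monte‐Carlo sampling idea underlying this line of work (a plain expectation estimator, not the Geiringer-guided algorithms the paper develops; the integrand and sample size are chosen purely for illustration):

```python
# Generic Monte-Carlo sketch: estimate E[X^2] for X ~ Uniform(0, 1),
# whose exact value is 1/3, together with a standard error.
import random
import statistics

def mc_estimate(f, n, seed=0):
    """Monte-Carlo estimate of E[f(X)], X ~ Uniform(0, 1), with standard error."""
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]
    mean = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / n ** 0.5
    return mean, stderr

mean, stderr = mc_estimate(lambda x: x * x, n=100_000)
print(f"{mean:.4f} ± {stderr:.4f}")  # close to 1/3
```

The point of more sophisticated sampling schemes, such as those developed in the paper, is to guide where such samples are drawn so the estimate converges faster.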
Iqbal M. Batiha, Adel Ouannas, Ramzi Albadarneh, Abeer A. Al-Nana and Shaher Momani
Abstract
Purpose
This paper aims to investigate the existence and uniqueness of solution for generalized Sturm–Liouville and Langevin equations formulated using the Caputo–Hadamard fractional derivative operator in accordance with three nonlocal Hadamard fractional integral boundary conditions. With regard to this nonlinear boundary value problem, three popular fixed point theorems, namely Krasnoselskii's theorem, Leray–Schauder's theorem and the Banach contraction principle, are employed to prove three novel theorems. The main outcomes of this work are verified and confirmed via several numerical examples.
Design/methodology/approach
In order to accomplish this purpose, three fixed point theorems are applied to the problem under consideration, subject to conditions established to this end: Krasnoselskii's theorem, Leray–Schauder's theorem and the Banach contraction principle.
Findings
In accordance with the fixed point theorems applied to the main problem, three corresponding theoretical results are stated, proved and then verified via several numerical examples.
Originality/value
The existence and uniqueness of solution for generalized Sturm–Liouville and Langevin equations formulated using Caputo–Hadamard fractional derivative operator in accordance with three nonlocal Hadamard fractional integral boundary conditions are studied. To the best of the authors’ knowledge, this work is original and has not been published elsewhere.
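The Banach contraction principle, one of the three fixed point theorems the abstract names, can be illustrated with a minimal sketch (unrelated to the authors' fractional boundary value problem; the map and starting point are chosen purely for illustration):

```python
# T(x) = cos(x) is a contraction on [0, 1] (|T'(x)| <= sin(1) < 1), so by
# the Banach contraction principle fixed-point iteration converges to its
# unique fixed point, the Dottie number ~ 0.739085.
import math

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = T(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

root = fixed_point(math.cos, 0.5)
print(round(root, 6))  # 0.739085
```

The contraction property is what guarantees both existence and uniqueness of the fixed point, which is exactly the role the theorem plays in the paper's boundary value problem.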
Yi Lin and Dillon Forrest
Abstract
Purpose
This paper aims to look at the economic concepts of consumption preferences and merit goods and the well-constructed examples The Lazy Rotten Kids, The Nightlight Controversial, and The Prodigal Son, in the light of a recent systemic model, named the yoyo model.
Design/methodology/approach
With the systemic yoyo model and its methodology used as the road‐map, the traditional calculus‐based methods are employed.
Findings
From the angle of whole systemic evolution, an astonishing theorem is established, named the Theorem of Never‐Perfect Value Systems. It states that, no matter how a value system is introduced and reinforced, the system will never be perfect. It is also shown that, when a tender, loving parent exists, his selfish child will take advantage of the parent by putting as little effort into his work as possible.
Originality/value
With recent developments in systems research as the foundation, two brand-new insights into household economics were discovered.
Abstract
After briefly reviewing the past history of Bayesian econometrics and Alan Greenspan's (2004) recent description of his use of Bayesian methods in managing policy-making risk, some of the issues and needs that he mentions are discussed and linked to past and present Bayesian econometric research. Then a review of some recent Bayesian econometric research and needs is presented. Finally, some thoughts are presented that relate to the future of Bayesian econometrics.
Rainer Michaeli and Lothar Simon
Abstract
Purpose
This paper is intended to enable competitive intelligence practitioners to use an important method for everyday work when confronted with conditional uncertainties: Bayes' theorem.
Design/methodology/approach
The paper shows how the mathematical concept of Bayes' theorem applies to competitive intelligence problems. The main approach is to illustrate the concepts with a near‐real-world example. The paper also provides background for further reading, especially on psychological problems connected with Bayes' theorem.
Findings
The main finding is that conditional uncertainties represent a common problem in competitive intelligence. They should be computed explicitly rather than estimated intuitively. Otherwise, serious misinterpretations and complete project failures might follow.
Research limitations/implications
The psychological literature shows that conditional uncertainties sometimes cannot be handled correctly by intuition, although they seem to be handled well when they concern human properties. This should be verified or falsified in the competitive intelligence context.
Practical implications
In general, the application of Bayes' theorem should be seen as one of the foundations of competitive intelligence education. In particular, once it is clear in which intelligence research situations conditional uncertainties can or cannot be handled intuitively, competitive intelligence education and practice should be adapted to these findings.
Originality/value
CI practitioners can underestimate the value of Bayes' theorem in practice, as they are often unaware of the (psychological) problems around handling conditional uncertainties intuitively. The article demonstrates how to take a computational approach to conditional uncertainties in CI projects and can thus be used as part of appropriate CI training material.
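The explicit computation the paper advocates can be sketched as follows; the scenario and all probabilities below are invented for illustration and do not come from the article:

```python
# Hypothetical competitive-intelligence example: P(H) is the prior that a
# competitor is building a rival product; P(E|H) and P(E|~H) are the
# chances of observing a hiring spike in either case. Bayes' theorem
# gives the posterior P(H|E) = P(E|H)P(H) / P(E).

def bayes_posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = p_evidence_given_h * prior
    evidence = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / evidence

posterior = bayes_posterior(prior=0.10,
                            p_evidence_given_h=0.80,
                            p_evidence_given_not_h=0.20)
print(round(posterior, 3))  # 0.308
```

Note that the posterior (about 31%) is far below what intuition, neglecting the 10% base rate, tends to suggest; this base-rate neglect is exactly the kind of misinterpretation the paper warns against.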
Abstract
A long Introduction provides a composite methodological standard of 25 elements (concepts, theorems and basic relationships) which actually represent in analysis a system of general stable equilibrium in economics and other social sciences. In practice, the same composite standard refers to a possible regime of a free, just and stable economy and society. This double composite scientific objective standard was used to examine the content of the Memorial Lectures presented by nine Laureates who received the Nobel Prize in Economics from 1969 to 1974. Specifically, the purpose was to see how much these lectures have contributed to the clarification and the solution of the major problems of our time.
Abstract
This chapter provides an alternative interpretation of the emergence of the “Ramsey-Cass-Koopmans” growth model, a framework which, alongside the overlapping generation model, is the dominant approach in today’s macroeconomics. By focusing on the role Paul Samuelson played through the works he developed in the turnpike literature, the author’s goal is to provide a more accurate history of growth theory of the 1940–1960s, one which started before Solow (1956) but never had him as a central reference. Inspired by John von Neumann’s famous 1945 article, Samuelson wrote his first turnpike paper by trying to conjecture an alternative optimal growth path (Samuelson, 1949 [1966]). In the 1960s, after reformulating the intertemporal utility model presented in Ramsey (1928), Samuelson began to propound it as a representative agent model. Through Samuelson’s interactions with colleagues and PhD students at the Massachusetts Institute of Technology (MIT), and given his standing in the profession, he encouraged a broader use of that device in macroeconomics, particularly, in growth theory. With the publication of Samuelson (1965), Tjalling Koopmans and Lionel McKenzie rewrote their own articles in order to account for the new approach. This work complements a recently written account on growth theory by Assaf and Duarte (2018).
Yanqin Fan and Emmanuel Guerre
Abstract
The asymptotic bias and variance of a general class of local polynomial estimators of M-regression functions are studied over the whole compact support of the multivariate covariate under a minimal assumption on the support. The support assumption ensures that the vicinity of the boundary of the support will be visited by the multivariate covariate. The results show that, as in the univariate case, multivariate local polynomial estimators have good bias and variance properties near the boundary. For the local polynomial regression estimator, we establish its asymptotic normality near the boundary and the usual optimal uniform convergence rate over the whole support. For local polynomial quantile regression, we establish a uniform linearization result which allows us to obtain results similar to those for local polynomial regression. We demonstrate both theoretically and numerically that, with our uniform results, the common practice of trimming local polynomial regression or quantile estimators to avoid "the boundary effect" is not needed.
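A minimal sketch of a local linear (degree-one local polynomial) estimate at a boundary point hints at the boundary behavior studied here; the design, kernel and bandwidth are invented for illustration, and this is not the authors' M-regression estimator:

```python
# Local linear estimate at the left boundary x0 = 0 of a noiseless
# regression m(x) = x^2 on [0, 1], using a Gaussian kernel. Unlike a
# local constant (kernel) estimate, the local linear fit keeps its
# small O(h^2) bias even at the boundary.
import math

def local_linear(x0, xs, ys, h):
    """Local linear estimate m(x0): weighted LS fit of y on (x - x0)."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    # The intercept of the 2x2 weighted normal equations is the estimate at x0.
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 ** 2)

xs = [i / 100 for i in range(101)]   # equally spaced design on [0, 1]
ys = [x ** 2 for x in xs]            # noiseless m(x) = x^2
est = local_linear(0.0, xs, ys, h=0.1)
print(est)  # close to m(0) = 0, with only a small O(h^2) bias
```

The fact that the estimate stays close to the truth at x0 = 0, where the kernel weights are entirely one-sided, is the boundary property that makes trimming unnecessary.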