Search results 1–10 of 245
The purpose of this paper is to support the use of unique identifiers for the authors of scientific publications. This, the authors believe, aligns with the views of many others, as it would solve the problem of author disambiguation. If every researcher had a unique identifier, there would be significant opportunities to provide even more services. These extensions are proposed in this paper.
The authors discuss the bibliographic services that are currently available. This leads to a discussion of how these services could be developed and extended.
The authors suggest a number of ways in which a unique identifier for scientific authors could support many other areas of importance to the scientific community. This would provide a much more robust system and a richer, more easily maintained scientific environment.
The scientific community lags behind most other communities with regard to the way it identifies individuals. Even if the current vision for a unique identifier for authors were to become more widespread, there would still be many areas where the community could improve its operations. This viewpoint paper suggests some of these, along with a financial model that could underpin the functionality.
Artificial intelligence is a consortium of data-driven methodologies which includes artificial neural networks, genetic algorithms, fuzzy logic, probabilistic belief networks and machine learning as its components. We have witnessed a phenomenal impact of this data-driven consortium of methodologies in many areas of study, the economic and financial fields being no exception. In particular, this volume of collected works will give examples of its impact on the field of economics and finance. This volume is the result of the selection of high-quality papers presented at a special session entitled “Applications of Artificial Intelligence in Economics and Finance” at the “2003 International Conference on Artificial Intelligence” (IC-AI ’03) held at the Monte Carlo Resort, Las Vegas, NV, USA, June 23–26, 2003. The special session, organised by Jane Binner, Graham Kendall and Shu-Heng Chen, was convened to draw attention to the tremendous diversity and richness of the applications of artificial intelligence to problems in Economics and Finance. This volume should appeal to economists interested in adopting an interdisciplinary approach to the study of economic problems, computer scientists who are looking for potential applications of artificial intelligence and practitioners who are looking for new perspectives on how to build models for everyday operations.
This work applies state-of-the-art artificial intelligence forecasting methods to provide new evidence of the comparative performance of statistically weighted Divisia indices vis-à-vis their simple sum counterparts in a simple inflation forecasting experiment. We develop a new approach that uses co-evolution (using neural networks and evolutionary strategies) as a predictive tool. This approach is simple to implement yet produces results that outperform stand-alone neural network predictions. Results suggest that superior tracking of inflation is possible for models that employ a Divisia M2 measure of money that has been adjusted to incorporate a learning mechanism to allow individuals to gradually alter their perceptions of the increased productivity of money. Divisia measures of money outperform their simple sum counterparts as macroeconomic indicators.
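The co-evolutionary predictor described above is not specified in detail here, so the following is only a minimal, hedged sketch of the general idea: an evolution strategy tuning the weights of a small feedforward network. The toy sine series, network size, and ES settings are all illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """One-hidden-layer net: 3 inputs -> 4 hidden (tanh) -> 1 output."""
    W1, b1 = w[:12].reshape(3, 4), w[12:16]
    W2, b2 = w[16:20].reshape(4, 1), w[20]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return float(np.mean((predict(w, X).ravel() - y) ** 2))

# Toy stand-in for an inflation series: predict x_t from three lags.
series = np.sin(np.linspace(0, 6 * np.pi, 200))
X = np.stack([series[i:i + 3] for i in range(197)])
y = series[3:]

# (mu + lambda) evolution strategy over the 21 network weights.
mu, lam, sigma = 5, 20, 0.5
parents = [rng.normal(0, 1, 21) for _ in range(mu)]
for gen in range(200):
    children = [p + rng.normal(0, sigma, 21)
                for p in parents for _ in range(lam // mu)]
    pool = sorted(parents + children, key=lambda w: mse(w, X, y))
    parents = pool[:mu]          # plus-selection keeps the best found so far
    sigma *= 0.99                # simple step-size decay

best_error = mse(parents[0], X, y)
```

In the paper's richer co-evolutionary setting, the evolutionary strategy and the network are adapted jointly; the sketch above shows only the simpler weights-by-ES core.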
This study applies VAR and ANN techniques to make ex-post forecasts of U.S. oil price movements. The VAR-based forecast uses three endogenous variables: lagged oil price, lagged oil supply and lagged energy consumption. However, the VAR model suggests that oil supply and energy consumption have only limited impacts on oil price movements. The forecast of the genetic algorithm-based ANN model is made using oil supply, energy consumption, and money supply (M1). Root mean squared error and mean absolute error have been used as the evaluation criteria. Our analysis suggests that the BPN-GA model noticeably outperforms the VAR model.
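The two evaluation criteria named above are standard; as a quick illustration, here is how they are computed. The numbers below are made-up placeholder values, not data from the study.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error between two equal-length series."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def mae(actual, forecast):
    """Mean absolute error between two equal-length series."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return float(np.mean(np.abs(a - f)))

actual   = [20.1, 21.4, 19.8, 22.0]   # e.g. observed oil prices (toy values)
forecast = [19.7, 21.9, 20.3, 21.5]   # a model's ex-post forecasts (toy values)
print(rmse(actual, forecast), mae(actual, forecast))
```

RMSE penalises large individual errors more heavily than MAE, which is why studies often report both.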
Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is completely meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work.
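The paper's claim can be demonstrated in a few lines: running k-means on sliding-window subsequences of even a structureless random walk yields smooth cluster centres that reveal nothing about the input. The window length, number of clusters, and random-walk data below are illustrative choices, and the k-means is a minimal reimplementation rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=1000))   # structureless random walk

w = 32  # window length
subs = np.stack([series[i:i + w] for i in range(len(series) - w)])

def kmeans(X, k, iters=25):
    """Plain k-means; keeps a centre unchanged if its cluster empties."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1), axis=1)
        centres = np.stack([X[labels == j].mean(0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres

centres = kmeans(subs, k=3)
# Averaging many overlapping windows smooths away the input's detail:
# each centre varies far less, point to point, than a raw subsequence.
```

The smoothness of the centres, independent of the data, is exactly the pathological constraint the paper identifies.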
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architectures were able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high frequency data.
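As a hedged sketch of the kind of search described above, the following toy genetic algorithm evolves a single architecture gene (the hidden-layer size). For speed, each candidate is scored with a random-feature least-squares fit rather than full backpropagation; the synthetic data, fitness proxy, and GA settings are assumptions for illustration only, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # toy "returns"
Xtr, Xva, ytr, yva = X[:150], X[150:], y[:150], y[150:]

def fitness(n_hidden):
    """Validation MSE of a net with a random hidden layer and a
    least-squares output layer (a fast stand-in for trained ANNs)."""
    W = rng.normal(size=(5, n_hidden))
    H_tr, H_va = np.tanh(Xtr @ W), np.tanh(Xva @ W)
    beta, *_ = np.linalg.lstsq(H_tr, ytr, rcond=None)
    return float(np.mean((H_va @ beta - yva) ** 2))

# GA over the integer architecture gene: hidden units in [1, 32].
pop = rng.integers(1, 33, size=10)
for gen in range(15):
    scores = np.array([fitness(int(n)) for n in pop])
    parents = pop[np.argsort(scores)[:5]]                 # truncation selection
    children = np.clip(parents + rng.integers(-3, 4, size=5), 1, 32)  # mutation
    pop = np.concatenate([parents, children])

best = int(pop[np.argmin([fitness(int(n)) for n in pop])])
```

The study's negative result is a useful reminder that even a well-tuned architecture search cannot conjure predictability out of near-random-walk returns.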
Divisia component data is used in the training of an Aggregate Feedforward Neural Network (AFFNN), a general-purpose connectionist system designed to assist with data mining activities. The neural network is able to learn the money-price relationship, defined as the relationship between the rate of growth of the money supply and inflation. The learned relationships are expressed as an automatically generated series of human-readable and machine-executable rules, which are shown to describe inflation meaningfully and accurately in terms of the original values of the Divisia component dataset.
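The AFFNN itself is not reproduced here; the snippet below is only a generic illustration of the rule-extraction idea: fit a model to a toy money-growth/inflation relation, then summarise its behaviour as human-readable, machine-executable if-then rules over binned inputs. All data, and the use of a least-squares fit standing in for the trained network, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
money_growth = rng.uniform(0, 10, 300)                    # % per year, toy data
inflation = 0.8 * money_growth - 1 + rng.normal(0, 0.3, 300)

# A simple least-squares fit stands in for the trained network here.
a, b = np.polyfit(money_growth, inflation, 1)
net = lambda m: a * m + b

# Express the learned relation as executable rules over input bins,
# probing the fitted model at each bin's midpoint.
rules = []
for lo in range(0, 10, 2):
    hi = lo + 2
    pred = float(net(lo + 1.0))
    rules.append((lo, hi, round(pred, 1)))
    print(f"IF {lo} <= money_growth < {hi} THEN inflation ~ {pred:.1f}")
```

The appeal of such rules is that they are both readable by economists and directly executable for prediction.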
In this paper we show, by means of an example application to the problem of house price forecasting, an approach to attribute selection and dependence modelling utilising the Gamma Test (GT), a non-linear analysis algorithm that we describe. The GT is employed in a two-stage process: first, the GT drives a Genetic Algorithm (GA) to select a useful subset of features from a large dataset that we develop from eight economic statistical series of historical measures that may impact upon house price movement. Next, we generate a predictive model utilising an Artificial Neural Network (ANN) trained to the Mean Squared Error (MSE) estimated by the GT, which accurately forecasts changes in the House Price Index (HPI). We present a background to the problem domain and demonstrate, based on the results of this methodology, that the GT was of great utility in facilitating a GA-based approach to extracting a sound predictive model from a large number of inputs in a data-point-sparse real-world application.
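The Gamma Test is the core tool here; below is a rough sketch of its usual form (assumed details, not the authors' implementation): output differences between near neighbours in input space are regressed against the corresponding input distances, and the regression intercept estimates the variance of the noise on the output. The toy dataset with known noise is an illustrative assumption used to sanity-check the estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_test(X, y, p=10):
    """Estimate output-noise variance via the near-neighbour Gamma statistic."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-matches
    order = np.argsort(d2, axis=1)[:, :p]                 # p nearest neighbours
    delta = np.array([d2[np.arange(n), order[:, k]].mean() for k in range(p)])
    gamma = np.array([((y[order[:, k]] - y) ** 2).mean() / 2 for k in range(p)])
    slope, intercept = np.polyfit(delta, gamma, 1)        # extrapolate to delta=0
    return intercept

X = rng.uniform(-1, 1, size=(500, 2))
noise = 0.1
y = np.sin(3 * X[:, 0]) * X[:, 1] + rng.normal(0, noise, 500)
print(gamma_test(X, y))   # should be on the order of noise**2 = 0.01
```

In the GT-driven feature selection described above, this statistic serves as the GA's fitness: feature subsets yielding a lower noise-variance estimate are judged more useful for modelling.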