K. Shailendra, R.N. Neogi and K.L. Gogia
Abstract
The International Centre in Paris of the International Serials Data System (ISDS) maintains the database of world serial publications and functions through a network of national and regional centres in various countries. ISDS is an intergovernmental organization established within the framework of the Unesco‐UNISIST programme. The Indian National Centre for ISDS was set up in January 1986 at the Indian National Scientific Documentation Centre (INSDOC), New Delhi, for the identification, registration, creation and maintenance of records of serial publications published in India, as well as for monitoring and promoting the use of International Standard Serial Numbers (ISSN). So far, printed Data Transmittal Sheets (DTS) have been used by this centre to send data on serial publications for incorporation in the ISDS database at the International Centre (IC). The Indian centre has now developed a computerised system by which these data can be transferred directly onto computer-designed DTS. The database so created has also been used to produce the ISDS‐India Bulletin, which describes the collection of records of serials published in India.
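As a worked illustration of the ISSN scheme the centre monitors: the eighth character of an ISSN is a check digit computed from the first seven digits by a modulus-11 weighted sum, with a remainder of 10 written as "X". A minimal sketch:

```python
def issn_check_digit(issn7):
    # issn7: the first seven digits of an ISSN as a string, e.g. "0317847".
    # Multiply the digits by weights 8 down to 2, sum, and take modulo 11.
    total = sum(int(d) * w for d, w in zip(issn7, range(8, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

# The standard's own example: ISSN 0317-8471 has check digit 1.
issn_check_digit("0317847")  # "1"
```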
Abstract
Purpose
This study aims to synthesize a novel donor–acceptor dye based on phenothiazine as the donor (D) linked through a nonconjugated spacer. The dye was devised and synthesized by condensing 2,2'-(1H-indene-1,3(2H)-diylidene)dimalononitrile with the corresponding aldehyde, following the practical synthesis methodology given in Scheme 1.
Design/methodology/approach
The prepared phenothiazine dye was systematically examined, both experimentally and theoretically, and characterized using nuclear magnetic resonance spectroscopy (1H, 13C NMR), Fourier-transform infrared spectroscopy (FT-IR) and high-resolution mass spectrometry. Density functional theory (DFT) and time-dependent density functional theory (TD-DFT) calculations were implemented to determine the electronic properties of the new dye.
Findings
The UV-Vis absorption and fluorescence spectra of the synthesized dye were investigated in a variety of solvents of varying polarity, demonstrating positive solvatochromism correlated with intramolecular charge transfer (ICT). The probe's fluorescence quantum yields (Φf) were measured experimentally in ethanol, and the Stokes shifts were found to lie in the 4846–9430 cm−1 range.
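Stokes shifts of the kind quoted above are the differences between absorption and emission maxima expressed in wavenumbers. A minimal sketch of the conversion, with illustrative wavelengths rather than values from the paper:

```python
def stokes_shift_cm1(lambda_abs_nm, lambda_em_nm):
    # Convert each wavelength (nm) to a wavenumber (cm^-1) via 1e7 / lambda,
    # then take the difference between absorption and emission.
    return 1e7 / lambda_abs_nm - 1e7 / lambda_em_nm

# Hypothetical maxima: absorption at 450 nm, emission at 600 nm
# give a shift of about 5556 cm^-1, within the reported range.
stokes_shift_cm1(450.0, 600.0)
```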
Originality/value
The findings indicate that the novel (D-π-A) chromophores may play a significant role in organic optoelectronics.
Louis Chauvel, Anne Hartung, Eyal Bar-Haim and Philippe Van Kerm
Abstract
The study of the upper tail of the income and wealth distributions is important to the understanding of economic inequality. By means of the ‘isograph’, a new tool to describe income or wealth distributions, the authors compare wealth, income and wealth-to-income ratios in 16 European countries and the United States, using data for the years 2013/2014 from the Eurozone Household Finance and Consumption Survey and the US Survey of Consumer Finances. Focussing on the top half of the distribution, the authors find that for households in the top income quintile, wealth-to-income ratios generally increase rapidly with income; the association between high wealth and high incomes is strongest among the highest percentiles. There is generally a positive relationship between median wealth in a country and the wealth of its top 1%. However, the United States is an outlier, where median wealth is relatively low but the wealth of the top 1% is extremely high.
Do Tien Sy, Zwe Man Aung and Nguyen Thanh Viet
Abstract
Claims and disputes are often unavoidable in the construction industry due to its unique and complex characteristics, involving massive capital investment, lengthy project duration and multiple project stakeholders. This chapter intends to identify the critical construction claim attributes, compare the perceptions of major stakeholders on different claim attributes, and contrast the top five claim attributes found in this study with those of previous ones. The literature review yielded 48 claim attributes responsible for construction project schedule delays. These attributes were then presented to Vietnam construction industry (VCI) practitioners in the form of a questionnaire survey, and data analysis was performed on the 113 qualified samples collected. The relative importance index (RII) was applied to rank the claim attributes. The results show that the top five causes of claims, namely payment delays, mistakes by the contractor during the construction stage, delays in work progress by the contractor, financial failure of the contractor and frequently changing requirements by the owner, lead to schedule delays in the VCI. These findings can help local industry practitioners and foreign companies seeking a share of the VCI market to understand the causes of construction claims comprehensively, formulate countermeasures to minimise their impacts, reduce unnecessary losses, raise the likelihood of success and maintain sustainable relationships among stakeholders.
Aswini Kumar Mishra, Anil Kumar and Abhishek Sinha
Abstract
Purpose
Though the Indian economy has expanded very rapidly since the 1980s, the benefits of growth remain very unequally distributed. The purpose of this paper is to provide new evidence about the shape, intensity and decomposition of inequality change between 2005 and 2012. The authors find that the Gini coefficient, as a measure of income inequality, has increased irrespective of geographic region.
Design/methodology/approach
Based on a recent distribution analysis tool, “ABG,” the paper focuses on local inequality, and summarizes the shape of inequality in terms of three inequality parameters (α, β and γ) to examine how the income distributions have changed over time. Here, the central coefficient (α) measures inequality at the median level, with adjustment parameters at the top (β) and bottom (γ).
Findings
The results reveal that at the middle of the distribution (α), inequality is almost the same in both periods, but the coefficients on the curvature parameters β and γ show increasing inequality in the subsequent period. Finally, an analysis of the decomposition of inequality change suggests that, though income growth was progressive, this equalizing effect was more than offset by the disequalizing effect of income reranking.
Research limitations/implications
This paper shows how it can be possible both for “the poor” to fare badly relative to “the rich” and for income growth to be pro-poor.
Practical implications
This paper stresses the significance of inequality reduction.
Social implications
Inequality reduction is very much imperative in ending poverty and boosting shared prosperity.
Originality/value
This research work is perhaps the first of its kind to examine the shape and decomposition of change in income inequality in India in recent years.
Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua
Abstract
Purpose
The research approach is based on the concept that a failure event is rarely random and is often generated by a chain of previous events connected by a sort of domino effect. Thus, the purpose of this study is the optimal selection of the components to predictively maintain on the basis of their failure probability, under budget and time constraints.
Design/methodology/approach
Asset maintenance is a major challenge for any process industry. Thanks to the development of Big Data Analytics techniques and tools, data produced by such systems can be analyzed in order to predict their behavior. Considering the asset as a social system composed of several interacting components, this work develops a framework to identify the relationships between component failures and to avoid them through the predictive replacement of critical components: the relationships are identified through Association Rule Mining (ARM), while their interaction is studied through Social Network Analysis (SNA).
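The ARM step described above can be sketched in a few lines: mine pairwise rules "failure of A implies failure of B" from historical failure logs, keeping rules whose support and confidence clear minimum thresholds. Component names, log contents and thresholds below are illustrative, not taken from the paper:

```python
from itertools import combinations
from collections import Counter

# Hypothetical failure logs: each entry lists the components that failed
# in the same maintenance window.
failure_logs = [
    {"pump", "seal"}, {"pump", "seal", "valve"}, {"valve"},
    {"pump", "seal"}, {"seal"}, {"pump", "valve"},
]

def association_rules(transactions, min_support=0.3, min_confidence=0.6):
    # Count single items and co-occurring pairs across all transactions.
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        for item in t:
            item_count[item] += 1
        for pair in combinations(sorted(t), 2):
            pair_count[pair] += 1
    # Keep rules antecedent -> consequent passing both thresholds.
    rules = []
    for (a, b), c in pair_count.items():
        support = c / n
        if support < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            confidence = c / item_count[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules
```

The resulting rule set (e.g. "seal failure implies pump failure") is what the framework then feeds into the SNA step as a directed graph of components.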
Findings
A case example of a process industry is presented to explain and test the proposed model and to discuss its applicability. The proposed framework provides an approach to expand upon previous work in the areas of prediction of fault events and monitoring strategy of critical components.
Originality/value
The novel combined adoption of ARM and SNA is proposed to identify the hidden interaction among events and to define the nature of such interactions and communities of nodes in order to analyze local and global paths and define the most influential entities.
Abstract
Purpose
To discuss subcopula estimation for discrete models.
Design/methodology/approach
The convergence of estimators is considered under the weak convergence of distribution functions and its equivalent properties known in prior works.
Findings
The domain of the true subcopula associated with discrete random variables is found to be discrete on the interior of the unit hypercube. An estimator whose domain has the same form as that of the true subcopula is constructed for the case in which the marginal distributions are binomial.
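The discreteness of the domain can be made concrete with a naive empirical version (an illustration only, not the paper's construction): the subcopula S satisfies S(F(x), G(y)) = H(x, y), so it is defined only on the grid of values taken by the marginal distribution functions.

```python
def empirical_subcopula(xs, ys):
    # Evaluate the joint empirical CDF H on the grid formed by the
    # empirical marginal CDF values F(x) and G(y); the result is a
    # subcopula defined only on that discrete grid.
    n = len(xs)
    x_levels = sorted(set(xs))
    y_levels = sorted(set(ys))
    F = {x: sum(xi <= x for xi in xs) / n for x in x_levels}
    G = {y: sum(yi <= y for yi in ys) / n for y in y_levels}
    S = {}
    for x in x_levels:
        for y in y_levels:
            H = sum(xi <= x and yi <= y for xi, yi in zip(xs, ys)) / n
            S[(F[x], G[y])] = H
    return S

# Four paired observations of two binary variables: the subcopula lives
# on the 2x2 grid {0.5, 1.0} x {0.5, 1.0}, not on the whole unit square.
empirical_subcopula([0, 0, 1, 1], [0, 1, 0, 1])
```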
Originality/value
To the best of our knowledge, this is the first time such an estimator has been defined and proved to converge to the true subcopula.
Abstract
Purpose
Many prior tests of market efficiency, conducted decades ago, were limited by data availability and did not employ methodology to correct for leptokurtosis in the stock return distribution. Furthermore, these studies did not test many aspects of conditional market efficiency. One such aspect is whether stock markets are efficient conditional on the level of stock returns.
Design/methodology/approach
This paper uses quantile regressions to control for leptokurtosis in the stock return distribution and simultaneous quantile regressions to test whether markets are efficient conditional on the level of the market return. This paper uses market-level stock return data to bias against finding significant results in the efficiency tests. Furthermore, the author uses data from 1926 through 2018, providing the longest time period to date under which market efficiency is tested.
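Quantile regression of the kind described above minimizes the pinball (check) loss rather than squared error, which is what makes it robust to the fat tails of return distributions. A minimal pure-Python sketch of fitting an AR(1) model at quantile tau by subgradient descent; this is an illustration of the technique, not the paper's estimator:

```python
def pinball_loss(tau, y, yhat):
    # Check loss: residuals above the fit are weighted tau,
    # residuals below are weighted (1 - tau).
    return sum((tau - (yi < fi)) * (yi - fi)
               for yi, fi in zip(y, yhat)) / len(y)

def quantile_ar1(returns, tau, lr=0.01, epochs=5000):
    # Fit r_t = a + b * r_{t-1} at quantile tau by subgradient
    # descent on the pinball loss.
    a, b = 0.0, 0.0
    x, y = returns[:-1], returns[1:]
    n = len(y)
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            resid = yi - (a + b * xi)
            g = tau if resid > 0 else tau - 1  # subgradient of check loss
            ga += g
            gb += g * xi
        a += lr * ga / n
        b += lr * gb / n
    return a, b
```

Re-running `quantile_ar1` across a grid of tau values and comparing the fitted slopes b is the shape of the conditional-efficiency test: a slope that changes sign across quantiles is exactly the pattern the Findings report.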
Findings
This paper presents evidence that the autoregressive coefficient decreases across return levels in stock market indices. The autoregressive coefficient is positive around highly negative returns and negative or insignificant around highly positive returns, which suggests that when stock returns are low they are more likely to continue lower, and when stock returns are high they are more likely to reverse. Results additionally suggest that market efficiency is not time-invariant and that stock markets have become more efficient over the sample period.
Originality/value
This paper extends the literature by finding evidence of a violation of weak-form market efficiency conditional on the level of stock returns. It further extends the literature by finding evidence that the stock market has become more efficient between 1926 and 2018.
Ingo Hoffmann and Christoph J. Börner
Abstract
Purpose
This paper aims to evaluate the accuracy of a quantile estimate. Especially when high quantiles are estimated from few data points, the quantile estimator itself is a random variable with its own distribution. This distribution is first determined, and it is then shown how the accuracy of the quantile estimation can be assessed in practice.
Design/methodology/approach
The paper considers the situation in which the parent distribution of the data is unknown; the tail is modeled with the generalized Pareto distribution and the quantile is then estimated using the fitted tail model. Based on well-known theoretical preliminary studies, the finite sample distribution of the quantile estimator is determined and the accuracy of the estimator is quantified.
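A quantile estimator of the kind described is typically the standard peaks-over-threshold formula: for threshold u, fitted GPD scale sigma and shape xi (xi != 0), sample size n and N_u exceedances, q_p = u + (sigma/xi) * (((n/N_u) * (1 - p))^(-xi) - 1). A sketch of this textbook estimator (the paper's exact variant may differ):

```python
def gpd_quantile(p, u, sigma, xi, n, n_exceed):
    # Peaks-over-threshold quantile estimator for xi != 0:
    # u is the threshold, sigma and xi the fitted GPD scale and shape,
    # n the sample size, n_exceed the number of exceedances over u.
    ratio = n / n_exceed * (1 - p)
    return u + sigma / xi * (ratio ** (-xi) - 1)

# Illustrative inputs: 99% quantile with threshold 2.0, scale 1.0,
# fat-tailed shape 0.5, and 100 exceedances out of 1000 observations.
gpd_quantile(0.99, 2.0, 1.0, 0.5, 1000, 100)
```

Because the fitted sigma and xi are themselves random, plugging them in makes q_p a random variable, which is precisely the distribution the paper characterizes.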
Findings
In general, the algebraic representation of the finite sample distribution of the quantile estimator was found. With the distribution, all statistical quantities can be determined. In particular, the expected value, the variance and the bias of the quantile estimator are calculated to evaluate the accuracy of the estimation process. Scaling laws could be derived and it turns out that with a fat tail and few data, the bias and the variance increase massively.
Research limitations/implications
Currently, the research is limited to the form of the tail, which is interesting for the financial sector. Future research might consider problems where the tail has a finite support or the tail is over-fat.
Practical implications
The ability to calculate error bands and the bias of the quantile estimator is equally important for financial institutions, regulators and auditors.
Originality/value
Understanding the quantile estimator as a random variable and analyzing and evaluating it based on its distribution gives researchers, regulators, auditors and practitioners new opportunities to assess risk.
Abstract
Libraries need to develop information processing systems for evaluation, budgeting, planning, and operations. Electronic spreadsheets lend themselves to a variety of applications but are time‐consuming to create. A model template and macros that can be used in many different types of library data analysis are presented here. The procedures demonstrated can build an essential set of tools for meeting fundamental goals of administrative efficiency, effective use of library resources, staff motivation, and rational policy making.