Search results

1 – 10 of over 221000
Article
Publication date: 13 December 2018

Thomas Belz, Dominik von Hagen and Christian Steffens

Using a meta-regression analysis, we quantitatively review the empirical literature on the relation between effective tax rate (ETR) and firm size. Accounting literature offers…

Abstract

Using a meta-regression analysis, we quantitatively review the empirical literature on the relation between effective tax rate (ETR) and firm size. Accounting literature offers two competing theories on this relation: The political cost theory, suggesting a positive size-ETR relation, and the political power theory, suggesting a negative size-ETR relation. Using a unique data set of 56 studies that do not show a clear tendency towards either of the two theories, we contribute to the discussion on the size-ETR relation in three ways: First, applying meta-regression analysis on a US meta-data set, we provide evidence supporting the political cost theory. Second, our analysis reveals factors that are possible sources of variation and bias in previous empirical studies; these findings can improve future empirical and analytical models. Third, we extend our analysis to a cross-country meta-data set; this extension enables us to investigate explanations for the two competing theories in more detail. We find that Hofstede’s cultural dimensions theory, a transparency index and a corruption index explain variation in the size-ETR relation. Independent of the two theories, we also find that tax planning aspects potentially affect the size-ETR relation. To our knowledge, these explanations have not yet been investigated in our research context.
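As a hedged illustration of the method named above (not the authors' exact specification), a meta-regression typically regresses the effect sizes reported by primary studies, here the estimated size-ETR coefficients, on study characteristics that may explain heterogeneity; the moderator names below are illustrative assumptions:

```latex
\hat{\beta}_{ij} = \alpha_0 + \sum_{k=1}^{K} \alpha_k Z_{k,ij} + \varepsilon_{ij}
```

where the hat-beta term is the i-th size-ETR estimate drawn from study j, the Z variables describe the study design (e.g. sample period, ETR definition, country), and the alpha coefficients measure how each design choice shifts the reported relation.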

Details

Journal of Accounting Literature, vol. 42 no. 1
Type: Research Article
ISSN: 0737-4607

Keywords

Book part
Publication date: 29 August 2018

Paul A. Pautler

The Bureau of Economics in the Federal Trade Commission has a three-part role in the Agency and the strength of its functions changed over time depending on the preferences and…

Abstract

The Bureau of Economics in the Federal Trade Commission has a three-part role in the Agency, and the strength of its functions has changed over time depending on the preferences and ideology of the FTC’s leaders, developments in the field of economics, and the tenor of the times. The overriding current role is to provide well-considered, unbiased economic advice regarding antitrust and consumer protection law enforcement cases to the legal staff and the Commission. The second role, which long ago was primary, is to provide reports on investigations of various industries to the public and public officials. This role was more recently called research or “policy R&D”. A third role is to advocate for competition and markets both domestically and internationally. As a practical matter, the provision of economic advice to the FTC and to the legal staff has required that the economists wear “two hats”: helping the legal staff investigate cases and provide evidence to support law enforcement, while also advising the legal bureaus and the Commission on which cases to pursue (thus providing “a second set of eyes” to evaluate cases). There is sometimes a tension in those functions because building a case is not the same as evaluating a case. Economists and the Bureau of Economics have provided such services to the FTC for over 100 years, proving that a sub-organization can survive while playing roles that sometimes conflict. Such a life is not, however, always easy or fun.

Details

Healthcare Antitrust, Settlements, and the Federal Trade Commission
Type: Book
ISBN: 978-1-78756-599-9

Keywords

Article
Publication date: 19 October 2015

Eugene Ch'ng

The purpose of this paper is to present a Big Data solution as a methodological approach to the automated collection, cleaning, collation, and mapping of multimodal, longitudinal…

Abstract

Purpose

The purpose of this paper is to present a Big Data solution as a methodological approach to the automated collection, cleaning, collation, and mapping of multimodal, longitudinal data sets from social media. The paper constructs social information landscapes (SIL).

Design/methodology/approach

The research presented here adopts a Big Data methodological approach to mapping user-generated content in social media. The methodology and algorithms presented are generic and can be applied to diverse types of social media or user-generated content involving user interactions, such as blogs, comments on product pages and other forms of media, so long as the formal data structure proposed here can be constructed.

Findings

The limited, sequential presentation of content listings within social media and Web 2.0 pages, as viewed in web browsers or on mobile devices, does not necessarily reveal or make obvious a largely unnoticed property of the medium: every participant, from content producers to consumers, followers and subscribers, together with the contents they produce or subscribe to, is intrinsically connected in a hidden but massive network. Such networks, when mapped, can be analysed quantitatively using social network analysis (e.g. centralities), and with appropriate analytics their semantics and sentiments can equally reveal valuable information. What remains difficult with traditional approaches is collecting, cleaning, collating and mapping such data sets into a sufficiently large sample that can yield important insights into the community structure and the direction and polarity of interaction on diverse topics. This research solves this particular strand of the problem.
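A hedged sketch of the kind of mapping this abstract describes, with producers, engaging users and content items placed in one directed network and then analysed with centrality measures; the record format, names and use of networkx are assumptions, not the authors' implementation.

```python
import networkx as nx

# Hypothetical interaction records: (content producer, engaging user, content item).
interactions = [
    ("alice", "bob", "post_1"),
    ("alice", "carol", "post_1"),
    ("bob", "carol", "post_2"),
]

# Producers, consumers and the contents they touch become nodes of one network.
G = nx.DiGraph()
for producer, user, content in interactions:
    G.add_edge(producer, content, role="produced")
    G.add_edge(user, content, role="engaged")

# Once mapped, the hidden network can be analysed quantitatively,
# e.g. with the centrality measures mentioned in the findings.
print(nx.degree_centrality(G))
```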

Research limitations/implications

The automated mapping of extremely large networks involving hundreds of thousands to millions of nodes, encapsulating high-resolution and contextual information over a long period of time, could assist in proving or even disproving theories. The goal of this paper is to demonstrate the feasibility of using automated approaches for acquiring massive, connected data sets for academic inquiry in the social sciences.

Practical implications

The methods presented in this paper, together with the Big Data architecture, can provide individuals and institutions with a limited budget with practical approaches to constructing SIL. The software-hardware integrated architecture uses open-source software, and the SIL mapping algorithms are easy to implement.

Originality/value

The majority of research in the literature uses traditional approaches for collecting social network data. Traditional approaches can be slow and tedious, and they do not yield an adequate sample size to be of significant value for research. Whilst traditional approaches collect only a small percentage of the data, the methods presented here can collect and collate entire data sets in social media thanks to the automated and scalable mapping techniques.

Details

Industrial Management & Data Systems, vol. 115 no. 9
Type: Research Article
ISSN: 0263-5577

Keywords

Article
Publication date: 29 August 2019

Vivekanand Venkataraman, Syed Usmanulla, Appaiah Sonnappa, Pratiksha Sadashiv, Suhaib Soofi Mohammed and Sundaresh S. Narayanan

The purpose of this paper is to identify the environmental variables and pollutants that have a significant effect on PM2.5 through wavelet and regression analysis.

Abstract

Purpose

The purpose of this paper is to identify the environmental variables and pollutants that have a significant effect on PM2.5 through wavelet and regression analysis.

Design/methodology/approach

In order to provide a stable data set for regression analysis, multiresolution analysis using wavelets is conducted. For the sampled data, multicollinearity among the independent variables is removed using principal component analysis, and multiple linear regression analysis is conducted with PM2.5 as the dependent variable.
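A minimal sketch of that pipeline (wavelet multiresolution analysis, then PCA to remove multicollinearity, then multiple linear regression with PM2.5 as the dependent variable), assuming the pywt and scikit-learn libraries; the synthetic data and variable names are illustrative, not the authors' data set.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 6))   # stand-ins for NO2, NOx, SO2, benzene, temperature, wind
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=512)   # stand-in PM2.5 series

def approximation(series, wavelet="db4", level=3):
    """Keep only the wavelet approximation: a smoother, more stationary series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

X_smooth = np.column_stack([approximation(X[:, j]) for j in range(X.shape[1])])
y_smooth = approximation(y)

# Principal components are orthogonal, so multicollinearity among predictors is removed.
components = PCA(n_components=0.95, svd_solver="full").fit_transform(X_smooth)

# Multiple linear regression with PM2.5 as the dependent variable.
model = LinearRegression().fit(components, y_smooth)
print("R^2:", model.score(components, y_smooth))
```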

Findings

It is found that a few pollutants, such as NO2, NOx, SO2 and benzene, and environmental factors, such as ambient temperature, solar radiation and wind direction, affect PM2.5. The regression model developed has a high R2 value of 91.9 percent, and the residuals are stationary and uncorrelated, indicating a sound model.

Research limitations/implications

The research provides a framework for extracting stationary data and other important features, such as change points in mean and variance, from the sample data for regression analysis. The work needs to be extended across all areas of India, and for other stationary data sets there may be different factors affecting PM2.5.

Practical implications

Control measures such as control charts can be implemented for significant factors.

Social implications

Rules and regulations governing the significant factors can be made more stringent.

Originality/value

The originality of this paper lies in the integration of wavelets with regression analysis for air pollution data.

Details

International Journal of Quality & Reliability Management, vol. 36 no. 10
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 13 December 2019

Yang Li and Xuhua Hu

The purpose of this paper is to solve the problem of information privacy and security of social users. Mobile internet and social network are more and more deeply integrated into…

Abstract

Purpose

The purpose of this paper is to solve the problem of information privacy and security for social network users. The mobile internet and social networks are ever more deeply integrated into people’s daily lives, and, under the combined momentum of the rapidly developing Internet of Things and diversified personalized services, more and more of users’ private information is exposed to the network environment, either actively or unintentionally. In addition, the large amount of social network data not only brings more benefits to network application providers but also provides motivation for malicious attackers. Therefore, in the social network environment, research on the privacy protection of user information has great theoretical and practical significance.

Design/methodology/approach

In this study, based on social network analysis and combined with the attribute reduction idea of rough set theory, generalized reduction concepts based on multi-level rough sets were proposed from the perspectives of positive region, information entropy and knowledge granularity. The hierarchical compatible granularity space of the original information system was then traversed and the corresponding attribute values were coarsened. The selected test data sets were tested, and the experimental results were analyzed.
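The attribute-value coarsening step can be pictured with a generic k-anonymity sketch. The greedy generalisation below is an illustrative stand-in, not the authors' multi-level rough set reduction, and the attribute names and hierarchies are assumptions.

```python
from collections import Counter

# Hypothetical one-level-up generalisation hierarchies for two quasi-identifiers.
HIERARCHIES = {
    "age": {"23": "20-29", "27": "20-29", "20-29": "adult", "adult": "*"},
    "zip": {"100193": "10019*", "10019*": "100***", "100***": "*"},
}

def coarsen(records, attr):
    """Replace each value of `attr` with its parent (one level coarser)."""
    h = HIERARCHIES[attr]
    return [{**r, attr: h.get(r[attr], "*")} for r in records]

def is_k_anonymous(records, quasi_ids, k):
    """Every combination of quasi-identifier values must occur at least k times."""
    groups = Counter(tuple(r[a] for a in quasi_ids) for r in records)
    return all(n >= k for n in groups.values())

def anonymize(records, quasi_ids, k):
    """Greedily coarsen quasi-identifiers until the table satisfies k-anonymity."""
    for _ in range(4):                     # at most the depth of the hierarchies
        if is_k_anonymous(records, quasi_ids, k):
            break
        for attr in quasi_ids:
            records = coarsen(records, attr)
    return records

table = [{"age": "23", "zip": "100193"}, {"age": "27", "zip": "100193"}]
print(anonymize(table, ["age", "zip"], k=2))
```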

Findings

The results showed that the algorithm can satisfy the anonymity requirements of data publishing and improve the effectiveness of classification modeling on anonymized data in the social network environment.

Research limitations/implications

In the testing and verification of the privacy protection algorithm and scheme, their efficiency needs to be evaluated at a larger data scale; the data used in this study are not sufficient for this. In subsequent research, more data will be used for testing and verification.

Practical implications

In the context of social networks, the hierarchical structure of data is introduced into rough set theory as domain knowledge, by reference to the human granulation cognitive mechanism, and rough set modeling of complex hierarchical data is studied for hierarchical decision tables. The theoretical results are applied to hierarchical decision rule mining and k-anonymous privacy-preserving data mining, which enriches the connotation of rough set theory and has important theoretical and practical significance for further promoting the application of this theory. In addition, combining secure multi-party computation with attribute reduction in rough set theory, a privacy-preserving feature selection algorithm for multi-source decision tables is proposed, which solves the privacy protection problem of feature selection in a distributed environment. It provides an effective set of rough set feature selection methods for privacy-preserving classification mining in distributed environments, which has practical application value for promoting the development of privacy-preserving data mining.

Originality/value

In this study, the proposed algorithm and scheme can effectively protect the privacy of social network data, ensure the availability of the social network graph structure and meet the need for both protection and sharing of user attribute and relational data.

Details

Library Hi Tech, vol. 40 no. 1
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 9 September 2014

Josep Maria Brunetti and Roberto García

The growing volumes of semantic data available in the web result in the need for handling the information overload phenomenon. The potential of this amount of data is enormous but…

Abstract

Purpose

The growing volume of semantic data available on the web creates a need to handle the information overload phenomenon. The potential of this data is enormous, but in most cases it is very difficult for users to visualize, explore and use, especially for lay-users without experience of Semantic Web technologies. The paper aims to discuss these issues.

Design/methodology/approach

The Visual Information-Seeking Mantra, “Overview first, zoom and filter, then details-on-demand”, proposed by Shneiderman, describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set; the objective is for the user to be able to form an idea of the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed so that they are generated automatically from semantic data, and they have been evaluated with end-users.

Findings

The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that users quickly become accustomed to the components despite the fact that they are generated automatically from structured data, without requiring knowledge of the underlying semantic technologies, and that the different overview components complement each other, as they address different information search needs.
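A minimal sketch of how one such overview component (a site index ordered by the number of instances per class) could be generated automatically from semantic data; the use of rdflib and the file name are assumptions, not the authors' component stack.

```python
from collections import Counter
from rdflib import Graph
from rdflib.namespace import RDF

g = Graph()
g.parse("dataset.ttl")          # hypothetical RDF dump of the data set to explore

# Count instances per class: the largest classes make natural top-level entries
# for a navigation bar, site map or Treemap overview.
class_counts = Counter(o for _, _, o in g.triples((None, RDF.type, None)))
for cls, n in class_counts.most_common(10):
    print(f"{cls}  ({n} instances)")
```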

Originality/value

Obtaining overviews of semantic data sets cannot easily be done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data, while providing a user experience comparable to traditional web sites.

Details

Aslib Journal of Information Management, vol. 66 no. 5
Type: Research Article
ISSN: 2050-3806

Keywords

Abstract

Many jurisdictions fine illegal cartels using penalty guidelines that presume an arbitrary 10% overcharge. This article surveys more than 700 published economic studies and judicial decisions that contain 2,041 quantitative estimates of overcharges of hard-core cartels. The primary findings are: (1) the median average long-run overcharge for all types of cartels over all time periods is 23.0%; (2) the mean average is at least 49%; (3) overcharges reached their zenith in 1891–1945 and have trended downward ever since; (4) the overcharge is zero in 6% of the cartel episodes; (5) median overcharges of international-membership cartels are 38% higher than those of domestic cartels; (6) convicted cartels are on average 19% more effective at raising prices than unpunished cartels; (7) bid-rigging conduct displays 25% lower markups than price-fixing cartels; (8) contemporary cartels targeted by class actions have higher overcharges; and (9) when cartels operate at peak effectiveness, price changes are 60–80% higher than over the whole episode. Historical penalty guidelines aimed at optimally deterring cartels are likely to be too low.

Details

The Law and Economics of Class Actions
Type: Book
ISBN: 978-1-78350-951-5

Keywords

Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with…

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with “structural” theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves, and also identifying restrictions for hypothesis tests, estimation and prediction. Econometric comparisons suggest that for the purpose of prediction (as opposed to explanation), choices of stochastic models may be far more consequential than choices of structures such as expected utility or rank-dependent utility.
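To make the pairing of a “structural” theory with a stochastic model concrete, here is one common specification, a logistic choice rule wrapped around an expected-utility difference; it is an illustrative example only, not necessarily one of the five models the chapter compares:

```latex
\Pr(\text{choose } A \text{ over } B) = \Lambda\bigl(\lambda\,[EU(A) - EU(B)]\bigr),
\qquad \Lambda(x) = \frac{1}{1 + e^{-x}},
\qquad EU(L) = \sum_i p_i\, u(x_i)
```

Here lambda > 0 governs the strength of the stochastic component (choices approach determinism as lambda grows), and u(.) is supplied by the structural theory, e.g. expected utility or rank-dependent utility.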

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5

Article
Publication date: 9 August 2021

Vyacheslav I. Zavalin and Shawne D. Miksa

This paper aims to discuss the challenges encountered in collecting, cleaning and analyzing the large data set of bibliographic metadata records in machine-readable cataloging…

Abstract

Purpose

This paper aims to discuss the challenges encountered in collecting, cleaning and analyzing the large data set of bibliographic metadata records in machine-readable cataloging [MARC 21] format. Possible solutions are presented.

Design/methodology/approach

This mixed method study relied on content analysis and social network analysis. The study examined subject representation in MARC 21 metadata records created in 2020 in WorldCat – the largest international database of “big smart data.” The methodological challenges that were encountered and solutions are examined.
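A hedged sketch of the kind of pipeline implied by this design, reading MARC 21 records and linking subject headings that co-occur on the same record into a network; the file name, field and subfield choices, and libraries (pymarc, networkx) are assumptions rather than the authors' code.

```python
from itertools import combinations
import networkx as nx
from pymarc import MARCReader

G = nx.Graph()
with open("worldcat_sample.mrc", "rb") as fh:          # hypothetical MARC 21 export
    for record in MARCReader(fh):
        # Topical subject headings: field 650, subfield $a.
        subjects = {sf for f in record.get_fields("650") for sf in f.get_subfields("a")}
        # Subjects sharing a record are linked; edge weights count co-occurrences.
        for a, b in combinations(sorted(subjects), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

print(G.number_of_nodes(), "subject terms,", G.number_of_edges(), "co-occurrence links")
```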

Findings

In this general review paper with a focus on methodological issues, the discussion of challenges is followed by a discussion of solutions developed and tested as part of this study. Data collection, processing, analysis and visualization are addressed separately. Lessons learned and conclusions related to challenges and solutions for the design of a large-scale study evaluating MARC 21 bibliographic metadata from WorldCat are given. Overall recommendations for the design and implementation of future research are suggested.

Originality/value

There are no previous publications that address the challenges and solutions of data collection and analysis of WorldCat’s “big smart data” in the form of MARC 21 data. This is the first study to use a large data set to systematically examine MARC 21 library metadata records created after the most recent addition of new fields and subfields to the MARC 21 Bibliographic Format standard in 2019 based on resource description and access rules. It is also the first to focus its analyses on the networks formed by subject terms shared by MARC 21 bibliographic records in a data set extracted from a heterogeneous centralized database, WorldCat.

Details

The Electronic Library, vol. 39 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 11 September 2007

Linn Marks Collins, Jeremy A.T. Hussell, Robert K. Hettinga, James E. Powell, Ketan K. Mane and Mark L.B. Martinez

To describe how information visualization can be used in the design of interface tools for large‐scale repositories.

Abstract

Purpose

To describe how information visualization can be used in the design of interface tools for large‐scale repositories.

Design/methodology/approach

One challenge for designers in the context of large-scale repositories is to create interface tools that help users find specific information of interest. To be most effective, these tools need to leverage the cognitive characteristics of the target users. At the Los Alamos National Laboratory, the authors' target users are scientists and engineers, who can be characterized as higher-order, analytical thinkers. In this paper, the authors describe a visualization tool they have created to make their large-scale digital object repositories more usable for these users: SearchGraph, which facilitates data set analysis by displaying search results in the form of a two- or three-dimensional interactive scatter plot.

Findings

Using SearchGraph, users can view a condensed, abstract visualization of search results. They can view the same dataset from multiple perspectives by manipulating several display, sort, and filter options. Doing so allows them to see different patterns in the dataset. For example, they can apply a logarithmic transformation in order to create more scatter in a dense cluster of data points or they can apply filters in order to focus on a specific subset of data points.
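A minimal sketch of the kind of interaction described, a scatter plot in which a logarithmic transformation spreads out a dense cluster of points; the synthetic metadata fields are assumptions, not the repository's actual facets.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical search-result metadata: publication year and citation count.
rng = np.random.default_rng(1)
years = rng.integers(1990, 2007, size=300)
citations = rng.lognormal(mean=2.0, sigma=1.5, size=300)

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 4))
ax_lin.scatter(years, citations, s=10)
ax_lin.set_title("Linear scale: dense cluster near zero")

# A logarithmic transformation creates more scatter in the dense cluster.
ax_log.scatter(years, citations, s=10)
ax_log.set_yscale("log")
ax_log.set_title("Log scale: patterns become visible")

for ax in (ax_lin, ax_log):
    ax.set_xlabel("publication year")
    ax.set_ylabel("citations")
plt.tight_layout()
plt.show()
```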

Originality/value

SearchGraph is a creative solution to the problem of how to design interface tools for large‐scale repositories. It is particularly appropriate for the authors' target users, who are scientists and engineers. It extends the work of the first two authors on ActiveGraph, a read‐write digital library visualization tool.

Details

Library Hi Tech, vol. 25 no. 3
Type: Research Article
ISSN: 0737-8831

Keywords
