Search results

1 – 10 of over 172,000
Book part
Publication date: 29 August 2018

Paul A. Pautler

Abstract

The Bureau of Economics in the Federal Trade Commission has a three-part role in the Agency, and the strength of its functions has changed over time depending on the preferences and ideology of the FTC's leaders, developments in the field of economics, and the tenor of the times. The overriding current role is to provide well-considered, unbiased economic advice regarding antitrust and consumer protection law enforcement cases to the legal staff and the Commission. The second role, which long ago was primary, is to provide reports on investigations of various industries to the public and public officials. This role was more recently called research or "policy R&D". A third role is to advocate for competition and markets both domestically and internationally. As a practical matter, the provision of economic advice to the FTC and to the legal staff has required that the economists wear "two hats": helping the legal staff investigate cases and provide evidence to support law enforcement cases, while also advising the legal bureaus and the Commission on which cases to pursue (thus providing "a second set of eyes" to evaluate cases). There is sometimes a tension between those functions because building a case is not the same as evaluating a case. Economists and the Bureau of Economics have provided such services to the FTC for over 100 years, proving that a sub-organization can survive while playing roles that sometimes conflict. Such a life is not, however, always easy or fun.

Details

Healthcare Antitrust, Settlements, and the Federal Trade Commission
Type: Book
ISBN: 978-1-78756-599-9

Article
Publication date: 19 October 2015

Eugene Ch'ng

Abstract

Purpose

The purpose of this paper is to present a Big Data solution as a methodological approach to the automated collection, cleaning, collation, and mapping of multimodal, longitudinal data sets from social media. The paper constructs social information landscapes (SIL).

Design/methodology/approach

The research presented here adopts a Big Data methodological approach to mapping user-generated content in social media. The methodology and algorithms presented are generic and can be applied to diverse types of social media or user-generated content involving user interactions, such as blogs, comments on product pages, and other forms of media, so long as the formal data structure proposed here can be constructed.

Findings

The sequential listing of content within social media and Web 2.0 pages, as viewed in web browsers or on mobile devices, does not reveal an underlying property of the medium: every participant, from content producers to consumers, followers and subscribers, together with the content they produce or subscribe to, is intrinsically connected in a hidden but massive network. Once mapped, such networks can be analysed quantitatively using social network analysis (e.g. centralities), and their semantics and sentiments can equally reveal valuable information with appropriate analytics. The difficulty lies in collecting, cleaning, collating, and mapping such data sets into samples large enough to yield important insights into community structure and the direction and polarity of interaction on diverse topics; traditional approaches do not scale to this task. This research solves that particular strand of the problem.
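
As an illustration of the analysis step described above, here is a minimal sketch (not the authors' implementation; the interaction data are hypothetical) that maps a handful of producer-follower interactions into a directed graph with networkx and computes the centralities mentioned:

```python
# Hypothetical interactions harvested from a social media stream:
# (producer, follower/commenter) edges. Names are made up.
import networkx as nx

interactions = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "alice"), ("carol", "dave"), ("carol", "bob"),
]

g = nx.DiGraph()
g.add_edges_from(interactions)

# Centrality measures quantify who anchors the hidden network.
print(nx.in_degree_centrality(g))     # who attracts the most links
print(nx.betweenness_centrality(g))   # who bridges communities
```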

Research limitations/implications

The automated mapping of extremely large networks, involving hundreds of thousands to millions of nodes and encapsulating high-resolution, contextual information over a long period of time, could assist in proving or even disproving theories. The goal of this paper is to demonstrate the feasibility of using automated approaches to acquire massive, connected data sets for academic inquiry in the social sciences.

Practical implications

The methods presented in this paper, together with the Big Data architecture, can provide individuals and institutions with a limited budget with practical approaches to constructing SIL. The integrated software-hardware architecture uses open-source software, and the SIL mapping algorithms are easy to implement.

Originality/value

The majority of research in the literature uses traditional approaches to collecting social network data. Traditional approaches can be slow and tedious, and they do not yield sample sizes adequate to be of significant value for research. Whereas traditional approaches collect only a small percentage of the data, the methods presented here can collect and collate entire data sets in social media, owing to the automated and scalable mapping techniques.

Details

Industrial Management & Data Systems, vol. 115 no. 9
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 4 November 2019

Vivekanand Venkataraman, Syed Usmanulla, Appaiah Sonnappa, Pratiksha Sadashiv, Suhaib Soofi Mohammed and Sundaresh S. Narayanan

Abstract

Purpose

The purpose of this paper is to identify significant factors of environmental variables and pollutants that have an effect on PM2.5 through wavelet and regression analysis.

Design/methodology/approach

In order to provide a stable data set for regression analysis, multiresolution analysis using wavelets is conducted. For the sampled data, multicollinearity among the independent variables is removed using principal component analysis, and multiple linear regression analysis is then conducted with PM2.5 as the dependent variable.
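
A hedged sketch of this pipeline using PyWavelets and scikit-learn on synthetic data; the db4 wavelet, decomposition level, and component count are illustrative assumptions, not the authors' settings:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def wavelet_smooth(series, wavelet="db4", level=3):
    """Multiresolution smoothing: keep the approximation, zero the details."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(256, 6))   # stand-in pollutant/weather series
pm25 = rng.normal(size=256)         # stand-in PM2.5 series

# 1. Stabilise each predictor via its wavelet approximation.
X_smooth = np.column_stack([wavelet_smooth(X_raw[:, j]) for j in range(6)])

# 2. Remove multicollinearity with PCA.
X_pc = PCA(n_components=4).fit_transform(X_smooth)

# 3. Multiple linear regression with PM2.5 as the dependent variable.
model = LinearRegression().fit(X_pc, wavelet_smooth(pm25))
print(model.score(X_pc, wavelet_smooth(pm25)))  # in-sample R^2
```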

Findings

It is found that a few pollutants, such as NO2, NOx, SO2 and benzene, and environmental factors such as ambient temperature, solar radiation and wind direction affect PM2.5. The regression model developed has a high R2 value of 91.9 percent, and the residuals are stationary and uncorrelated, indicating a sound model.

Research limitations/implications

The research provides a framework for extracting stationary data and other important features, such as change points in mean and variance, from the sample data for regression analysis. The work needs to be extended across all areas of India, and for other stationary data sets different factors may affect PM2.5.

Practical implications

Control measures such as control charts can be implemented for significant factors.

Social implications

Rules and regulations concerning the significant factors can be made more stringent.

Originality/value

The originality of this paper lies in the integration of wavelets with regression analysis for air pollution data.

Details

International Journal of Quality & Reliability Management, vol. 36 no. 10
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 9 September 2014

Josep Maria Brunetti and Roberto García

Abstract

Purpose

The growing volume of semantic data available on the web creates a need to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use, especially for lay users without experience of Semantic Web technologies. The paper aims to discuss these issues.

Design/methodology/approach

The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve effective exploration. The overview is the first user task when dealing with a data set: the objective is that the user is capable of getting an idea of the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed such that they are generated automatically from semantic data, and they have been evaluated with end users.

Findings

The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end users have shown that users become accustomed to the components easily, despite the fact that they are generated automatically from structured data and require no knowledge of the underlying semantic technologies, and that the different overview components complement each other, as they focus on different information search needs.
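
As a toy illustration of generating such a component automatically from semantic data, the sketch below builds an alphabetical site index from the rdfs:label values in an ontology using rdflib; the file name is a placeholder and the grouping rule is an assumption, not the authors' algorithm:

```python
from collections import defaultdict
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("ontology.ttl")  # placeholder path to the semantic data

# Group every rdfs:label under its initial letter, A-Z site-index style.
index = defaultdict(list)
for _, _, label in g.triples((None, RDFS.label, None)):
    index[str(label)[:1].upper()].append(str(label))

for letter in sorted(index):
    print(letter, "->", ", ".join(sorted(index[letter])))
```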

Originality/value

Overviews of semantic data sets cannot easily be obtained with current Semantic Web browsers. Overviews are difficult to achieve with large heterogeneous data sets, which are typical of the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.

Details

Aslib Journal of Information Management, vol. 66 no. 5
Type: Research Article
ISSN: 2050-3806

Abstract

Many jurisdictions fine illegal cartels using penalty guidelines that presume an arbitrary 10% overcharge. This article surveys more than 700 published economic studies and judicial decisions that contain 2,041 quantitative estimates of overcharges by hard-core cartels. The primary findings are: (1) the median average long-run overcharge for all types of cartels over all time periods is 23.0%; (2) the mean average is at least 49%; (3) overcharges reached their zenith in 1891–1945 and have trended downward ever since; (4) 6% of the cartel episodes involve zero overcharge; (5) median overcharges of international-membership cartels are 38% higher than those of domestic cartels; (6) convicted cartels are on average 19% more effective at raising prices than unpunished cartels; (7) bid-rigging conduct displays 25% lower markups than price-fixing cartels; (8) contemporary cartels targeted by class actions have higher overcharges; and (9) when cartels operate at peak effectiveness, price changes are 60–80% higher than over the whole episode. Historical penalty guidelines aimed at optimally deterring cartels are likely to be too low.

Details

The Law and Economics of Class Actions
Type: Book
ISBN: 978-1-78350-951-5

Article
Publication date: 11 September 2007

Linn Marks Collins, Jeremy A.T. Hussell, Robert K. Hettinga, James E. Powell, Ketan K. Mane and Mark L.B. Martinez

Abstract

Purpose

To describe how information visualization can be used in the design of interface tools for large-scale repositories.

Design/methodology/approach

One challenge for designers in the context of large-scale repositories is to create interface tools that help users find specific information of interest. To be most effective, these tools need to leverage the cognitive characteristics of the target users. At the Los Alamos National Laboratory, the authors' target users are scientists and engineers who can be characterized as higher-order, analytical thinkers. In this paper, the authors describe a visualization tool they have created to make these large-scale digital object repositories more usable for them: SearchGraph, which facilitates data set analysis by displaying search results in the form of a two- or three-dimensional interactive scatter plot.

Findings

Using SearchGraph, users can view a condensed, abstract visualization of search results. They can view the same data set from multiple perspectives by manipulating several display, sort, and filter options, allowing them to see different patterns in the data. For example, they can apply a logarithmic transformation to create more scatter in a dense cluster of data points, or apply filters to focus on a specific subset of data points.
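
The following sketch (illustrative only, not SearchGraph itself; the metadata are synthetic) reproduces the two manipulations just described with matplotlib: a log10 transform to spread a dense cluster, and a boolean filter to focus on a subset:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
citations = rng.lognormal(mean=2.0, sigma=1.0, size=500)  # skewed metric
year = rng.integers(1995, 2008, size=500)                 # fake metadata

recent = year >= 2000  # filter: focus on a subset of results

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.scatter(year, citations, s=8)
ax1.set_title("Raw: dense cluster near zero")
ax2.scatter(year[recent], np.log10(citations[recent]), s=8)
ax2.set_title("Filtered, log10: more scatter")
plt.show()
```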

Originality/value

SearchGraph is a creative solution to the problem of how to design interface tools for large-scale repositories. It is particularly appropriate for the authors' target users, who are scientists and engineers. It extends the work of the first two authors on ActiveGraph, a read-write digital library visualization tool.

Details

Library Hi Tech, vol. 25 no. 3
Type: Research Article
ISSN: 0737-8831

Book part
Publication date: 3 June 2008

Nathaniel T. Wilcox

Abstract

Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with "structural" theories of choice under risk. Stochastic models are substantive theoretical hypotheses: they are frequently testable in and of themselves, and they also serve as identifying restrictions for hypothesis tests, estimation and prediction. Econometric comparisons suggest that, for the purpose of prediction (as opposed to explanation), the choice of stochastic model may be far more consequential than the choice of structure, such as expected utility or rank-dependent utility.
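
As a concrete example of how a stochastic model combines with a structural theory, here is a minimal sketch assuming a logit ("strong utility") stochastic model layered on an expected utility structure with CRRA utility; the lotteries, curvature, and noise parameter are illustrative, not the chapter's specification:

```python
import math

def expected_utility(lottery, crra=0.5):
    """EU of [(probability, outcome), ...] under CRRA utility x**crra."""
    return sum(p * (x ** crra) for p, x in lottery)

def prob_choose_a(lottery_a, lottery_b, noise=1.0):
    """Logit choice probability; larger noise means more random choice."""
    diff = expected_utility(lottery_a) - expected_utility(lottery_b)
    return 1.0 / (1.0 + math.exp(-diff / noise))

safe = [(1.0, 36.0)]               # 36 for sure: EU = 6
risky = [(0.5, 64.0), (0.5, 4.0)]  # EU = 0.5*8 + 0.5*2 = 5
print(prob_choose_a(safe, risky))  # ~0.73: usually, not always, safe
```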

Details

Risk Aversion in Experiments
Type: Book
ISBN: 978-1-84950-547-5

Article
Publication date: 13 December 2019

Yang Li and Xuhua Hu

Abstract

Purpose

The purpose of this paper is to solve the problem of the information privacy and security of social users. The mobile internet and social networks are ever more deeply integrated into people's daily lives, and under the combined momentum of the rapidly developing Internet of Things and diversified personalized services, more and more private information of social users is exposed to the network environment, actively or unintentionally. In addition, the large amount of social network data not only brings more benefits to network application providers, but also provides motivation for malicious attackers. Therefore, in the social network environment, research on the privacy protection of user information has great theoretical and practical significance.

Design/methodology/approach

In this study, building on social network analysis and the attribute reduction idea of rough set theory, generalized reduction concepts based on multi-level rough sets were proposed from the perspectives of the positive region, information entropy and knowledge granularity. The hierarchical compatible granularity space of the original information system was then traversed and the corresponding attribute values were coarsened. Selected test data sets were evaluated and the experimental results analyzed.
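
A simplified sketch of the attribute-coarsening idea, assuming a pandas table and a hand-written generalization hierarchy for a single attribute; this illustrates level-wise coarsening toward a k-anonymity requirement, not the authors' multi-level rough set algorithm:

```python
import pandas as pd

# Toy user table; ages and cities are made up.
df = pd.DataFrame({"age": [23, 27, 24, 29, 36, 38],
                   "city": ["Leeds", "Leeds", "York", "York", "York", "York"]})

# Hand-written generalization hierarchy for the age attribute.
hierarchy = [lambda a: a,                     # level 0: exact age
             lambda a: f"{(a // 10) * 10}s",  # level 1: decade ("20s")
             lambda a: "*"]                   # level 2: fully suppressed

def k_anonymise(table, k=2):
    """Coarsen age level by level until every (age, city) class has >= k rows."""
    for level in hierarchy:
        out = table.assign(age=table["age"].map(level))
        if out.groupby(["age", "city"]).size().min() >= k:
            return out
    return out  # fall back to the coarsest level

print(k_anonymise(df, k=2))
```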

Findings

The results showed that the algorithm can guarantee the anonymity requirement of data publishing and improve the performance of classification modeling on anonymized data in a social network environment.

Research limitations/implications

In testing and verifying the privacy protection algorithm and scheme, their efficiency needs to be evaluated at a larger data scale; the data used in this study are not sufficient for that purpose. In follow-up research, more data will be used for testing and verification.

Practical implications

In the context of social networks, the hierarchical structure of data is introduced into rough set theory as domain knowledge, following the human granulation cognitive mechanism, and rough set modeling of complex hierarchical data is studied for the hierarchical data of decision tables. The theoretical results are applied to hierarchical decision rule mining and k-anonymous privacy-preserving data mining, which enriches the connotation of rough set theory and has important theoretical and practical significance for further promoting its application. In addition, combining secure multi-party computation with rough set attribute reduction, a privacy-preserving feature selection algorithm for multi-source decision tables is proposed, solving the privacy protection problem of feature selection in a distributed environment. It provides an effective rough set feature selection method for privacy-preserving classification mining in distributed environments, with practical value for promoting the development of privacy-preserving data mining.

Originality/value

In this study, the proposed algorithm and scheme can effectively protect the privacy of social network data, ensure the availability of the social network graph structure, and meet the need for both protection and sharing of user attribute and relational data.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Book part
Publication date: 27 August 2016

James K. Galbraith, Jaehee Choi, Béatrice Halbach, Aleksandra Malinowska and Wenjie Zhang

Abstract

We present a comparison of coverage and values for five inequality data sets that have worldwide or major international coverage and independent measurements, and that are intended to present consistent coefficients that can be compared directly across countries and time. The comparison data sets are those published by the Luxembourg Income Study (LIS), the OECD, the European Union's Statistics on Income and Living Conditions (EU-SILC), and the World Bank's World Development Indicators (WDI). The baseline for comparison is our own Estimated Household Income Inequality (EHII) data set of the University of Texas Inequality Project. The comparison shows the historical depth and range of EHII and its broad compatibility with LIS, OECD, and EU-SILC, as well as problems with using the WDI for any cross-country comparative purpose. The comparison excludes the large World Income Inequality Database (WIID) of UNU-WIDER and the Standardized World Income Inequality Database (SWIID) of Frederick Solt; the former is a bibliographic collection, and the latter is based on imputations drawn, in part, from EHII and the other sources used here.

Details

Income Inequality Around the World
Type: Book
ISBN: 978-1-78560-943-5

Details

Machine Learning and Artificial Intelligence in Marketing and Sales
Type: Book
ISBN: 978-1-80043-881-1
