Search results

1 – 10 of over 55000
Article
Publication date: 20 November 2017

Thushari Silva and Jian Ma

Expert profiling plays an important role in expert finding for collaborative innovation in research social networking platforms. Dynamic changes in scientific knowledge have posed…


Abstract

Purpose

Expert profiling plays an important role in expert finding for collaborative innovation in research social networking platforms. Dynamic changes in scientific knowledge have posed significant challenges to expert profiling. Current approaches mostly rely on the knowledge of other experts, the contents of static web pages or user behaviour, and thus overlook the insight offered by the big social data generated through crowdsourcing in research social networks and scientific data sources. In light of this deficiency, this research proposes a big data-based approach that harnesses the collective intelligence of the crowd in (research) social networking platforms and scientific databases for expert profiling.

Design/methodology/approach

A big data analytics approach that uses crowdsourcing is designed and developed for expert profiling. The proposed approach interconnects big data sources covering publication data, project data and data from social networks (i.e. posts, updates and endorsements collected through crowdsourcing). Large volumes of structured data representing scientific knowledge are available in Web of Science, Scopus, CNKI and the ACM Digital Library; these are treated as publication data in this research context. Project data are located in databases hosted by funding agencies. The authors follow a Map-Reduce strategy to extract real-time data from all these sources. Two main steps, feature mining and profile consolidation (the details of which are outlined in the manuscript), are followed to generate comprehensive user profiles. The major tasks in feature mining are processing the big data sources to extract representational profile features, entity-profile generation and social-profile generation through crowd-opinion mining. In profile consolidation, the two profiles, namely the entity-profile and the social-profile, are conflated.
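The abstract gives no implementation details, so the following is only a rough sketch of what a Map-Reduce style feature-extraction pass over such sources could look like: toy publication records are mapped to (researcher, keyword) pairs and reduced into per-researcher feature counts. All field names and values are illustrative assumptions, not the authors' schema.

```python
# Hypothetical sketch of a Map-Reduce style feature-extraction pass, NOT the
# authors' implementation: publication/project records are mapped to
# (researcher_id, feature) pairs and reduced into per-researcher profiles.
from collections import defaultdict

# Toy records standing in for rows pulled from Web of Science, Scopus, CNKI,
# the ACM DL or funding-agency databases (fields are illustrative assumptions).
records = [
    {"researcher_id": "r1", "source": "scopus", "keywords": ["crowdsourcing", "expert finding"]},
    {"researcher_id": "r1", "source": "wos", "keywords": ["social networks"]},
    {"researcher_id": "r2", "source": "acm", "keywords": ["mapreduce", "big data"]},
]

def map_phase(record):
    """Emit one (researcher_id, keyword) pair per keyword feature of a record."""
    for kw in record["keywords"]:
        yield record["researcher_id"], kw

def reduce_phase(pairs):
    """Group emitted pairs by researcher and count feature occurrences."""
    profiles = defaultdict(lambda: defaultdict(int))
    for researcher_id, kw in pairs:
        profiles[researcher_id][kw] += 1
    return {rid: dict(feats) for rid, feats in profiles.items()}

entity_profiles = reduce_phase(pair for rec in records for pair in map_phase(rec))
print(entity_profiles)
# {'r1': {'crowdsourcing': 1, 'expert finding': 1, 'social networks': 1},
#  'r2': {'mapreduce': 1, 'big data': 1}}
```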

Findings

(1) The integration of crowdsourcing techniques with big research data analytics has improved the graded relevance of the constructed profiles. (2) A system that constructs experts’ profiles based on the proposed methods has been incorporated into an operational system called ScholarMate (www.scholarmate.com).

Research limitations

One shortcoming is that the experiments conducted so far used a sampling strategy. In future work, the authors will perform large-scale controlled experiments and field tests to validate and comprehensively evaluate the design artifacts.

Practical implications

The business implication of this research is that the developed methods and system can be applied to streamline human capital management in organizations.

Originality/value

The proposed approach interconnects the opinions of crowds on one’s expertise with the corresponding expertise demonstrated in scientific knowledge bases to construct comprehensive profiles. This is a novel approach that alleviates problems associated with existing methods. The authors’ team has developed an expert profiling system operational in ScholarMate (www.scholarmate.com), a professional research social network launched in 2007 that connects people to research with the aim of “innovating smarter”.

Details

Information Discovery and Delivery, vol. 45 no. 4
Type: Research Article
ISSN: 2398-6247


Article
Publication date: 19 February 2021

C. Lakshmi and K. Usha Rani

The paper presents a resilient distributed processing technique (RDPT), in which the mapper and reducer are simplified with Spark contexts to support distributed parallel query processing.

Abstract

Purpose

The paper presents a resilient distributed processing technique (RDPT), in which the mapper and reducer are simplified with Spark contexts to support distributed parallel query processing.

Design/methodology/approach

The proposed work is implemented in Pig Latin with Spark contexts to provide query processing in a distributed environment.

Findings

Query processing in Hadoop relies on distributed processing with the MapReduce model. MapReduce distributes work across nodes through the implementation of complex mappers and reducers, but its results hold only up to a certain size of data.

Originality/value

Pig supports the required parallel processing framework with the following constructs during query processing: FOREACH, FLATTEN and COGROUP.
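No Pig Latin scripts are given in the abstract; as a hedged, language-neutral illustration of what the named constructs do relationally, the Python sketch below mimics FOREACH ... FLATTEN (un-nesting a bag into one row per element) and COGROUP (grouping two relations on a shared key). The relation names and fields are invented for illustration.

```python
# Illustrative Python analogues of the Pig Latin constructs named in the
# abstract; these are not the authors' scripts, just relational sketches.
from collections import defaultdict

queries = [("q1", ["term_a", "term_b"]), ("q2", ["term_c"])]   # (query_id, terms)
results = [("q1", "doc1"), ("q1", "doc2"), ("q2", "doc3")]      # (query_id, doc)

# FOREACH ... GENERATE FLATTEN(terms): un-nest the term list into one row per term
flattened = [(qid, term) for qid, terms in queries for term in terms]

# COGROUP queries BY query_id, results BY query_id: group both relations on the key
cogrouped = defaultdict(lambda: ([], []))
for qid, terms in queries:
    cogrouped[qid][0].append(terms)
for qid, doc in results:
    cogrouped[qid][1].append(doc)

print(flattened)        # [('q1', 'term_a'), ('q1', 'term_b'), ('q2', 'term_c')]
print(dict(cogrouped))  # {'q1': ([['term_a', 'term_b']], ['doc1', 'doc2']),
                        #  'q2': ([['term_c']], ['doc3'])}
```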

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 2
Type: Research Article
ISSN: 1756-378X


Book part
Publication date: 19 July 2022

Ayesha Banu

Introduction: The Internet has tremendously transformed the computer and networking world. Information reaches our fingertips and adds data to our repository within a second. Big…

Abstract

Introduction: The Internet has tremendously transformed the computer and networking world. Information reaches our fingertips and adds data to our repository within a second. Big data was initially defined by three Vs: data arriving with greater variety, in increasing volumes and at higher velocity. Big data is a collection of structured, unstructured and semi-structured data gathered from different sources and applications, and it has become the most powerful buzzword in almost all business sectors. The real success of any industry can be measured by how well big data is analysed, potential knowledge is discovered and productive business decisions are made. New technologies such as artificial intelligence and machine learning have added more efficiency to storing and analysing data. Big data analytics (BDA) is most valuable to companies focused on gaining insight into customer behaviour, trends and patterns. This popularity of big data has inspired insurance companies to utilise big data in their core systems to advance financial operations, improve customer service, construct a personalised environment and take all possible measures to increase revenue and profits.

Purpose: This study aims to recognise what big data stands for in the insurance sector and how the application of BDA has opened the door for new and innovative changes in the insurance industry.

Methodology: This study describes the field of BDA in the insurance sector; discusses its benefits; outlines the tools, architectural framework and method; describes applications both general and specific; and briefly discusses the opportunities and challenges.

Findings: The study concludes that BDA in insurance is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.

Details

Big Data: A Game Changer for Insurance Industry
Type: Book
ISBN: 978-1-80262-606-3


Article
Publication date: 12 March 2018

Ning Xian and Zhilong Chen

The purpose of this paper is to simplify the Explicit Nonlinear Model Predictive Controller (ENMPC) by linearizing the trajectory with Quantum-behaved Pigeon-Inspired Optimization…

Abstract

Purpose

The purpose of this paper is to simplify the Explicit Nonlinear Model Predictive Controller (ENMPC) by linearizing the trajectory with Quantum-behaved Pigeon-Inspired Optimization (QPIO).

Design/methodology/approach

The paper derives the nonlinear model of the quadrotor and uses the ENMPC to track the trajectory. Since the ENMPC places high demands on the state equation, the trajectory needs to be differentiated many times. When the trajectory is complicated or discontinuous, QPIO is proposed to linearize it; the linearized trajectory is then used in the ENMPC.

Findings

Applying the QPIO algorithm allows unequally spaced sample points to be acquired to linearize the trajectory. Compared with equidistant linear interpolation, the resulting linear interpolation error is smaller.
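The abstract does not specify the sampling rule; the following minimal Python sketch (not the QPIO algorithm itself) only illustrates why unequally spaced sample points, clustered where a trajectory changes sharply, can give a smaller piecewise-linear interpolation error than equidistant ones. The toy trajectory and the placement of points are assumptions.

```python
# Minimal sketch: piecewise-linear approximation of a trajectory with
# equidistant vs. non-uniform (denser where the curve bends) sample points.
# This only illustrates the interpolation-error idea, not the QPIO optimizer.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
traj = np.tanh(20.0 * (t - 0.5))          # toy trajectory with a sharp transition

def max_interp_error(sample_t):
    sample_y = np.tanh(20.0 * (sample_t - 0.5))
    approx = np.interp(t, sample_t, sample_y)  # piecewise-linear reconstruction
    return np.max(np.abs(approx - traj))

equidistant = np.linspace(0.0, 1.0, 11)
# Non-uniform points clustered around t = 0.5, where the trajectory bends sharply
x = np.linspace(-1.0, 1.0, 11)
nonuniform = 0.5 + 0.5 * np.sign(x) * np.abs(x) ** 2

print("equidistant error:", max_interp_error(equidistant))
print("non-uniform error:", max_interp_error(nonuniform))
```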

Practical implications

Small-sized quadrotors were adopted in this research to simplify the model. The model is assumed to be accurate and differentiable to meet the requirements of the ENMPC.

Originality/value

Traditionally, the quadrotor model is linearized in research on this problem. In this paper, the quadrotor model is kept nonlinear and the trajectory is linearized instead. Unequally spaced sample points are used to linearize the trajectory, yielding a smaller interpolation error. This method can also be applied to discrete systems to construct the interpolation for trajectory tracking.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 11 no. 1
Type: Research Article
ISSN: 1756-378X


Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate…


Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to the consumption of statistics for monitoring and provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, its implementation as a service used within the OpenAIRE infrastructure system, and reports results from real-case experiments.

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.
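GDup’s algorithms are not detailed in the abstract; the Python sketch below shows the generic shape of such a deduplication workflow under assumptions of mine: blocking to limit candidate pairs, a string-similarity test, and union-find to merge matches into disambiguated clusters. It is not the GDup implementation.

```python
# Generic entity-deduplication sketch (not GDup's actual implementation):
# block records by a cheap key, compare candidates, and merge matches
# with union-find so each cluster becomes one disambiguated entity.
from difflib import SequenceMatcher
from collections import defaultdict

records = {
    "p1": "Deduplication of big research graphs",
    "p2": "De-duplication of big research graphs",
    "p3": "Quadrotor trajectory tracking with ENMPC",
}

parent = {rid: rid for rid in records}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Blocking: only compare records sharing a cheap normalized key (first 3 letters)
blocks = defaultdict(list)
for rid, title in records.items():
    key = "".join(ch for ch in title.lower() if ch.isalnum())[:3]
    blocks[key].append(rid)

for ids in blocks.values():
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if SequenceMatcher(None, records[a].lower(), records[b].lower()).ratio() > 0.9:
                union(a, b)  # mark a and b as duplicates

clusters = defaultdict(list)
for rid in records:
    clusters[find(rid)].append(rid)
print(list(clusters.values()))  # [['p1', 'p2'], ['p3']]
```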

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288


Content available
Book part
Publication date: 18 July 2022

Abstract

Details

Big Data Analytics in the Insurance Market
Type: Book
ISBN: 978-1-80262-638-4

Book part
Publication date: 18 July 2022

Manish Bhardwaj and Shivani Agarwal

Introduction: In recent years, fresh big data ideas and concepts have emerged to address the massive increase in data volumes in several commercial areas. Meanwhile, the…

Abstract

Introduction: In recent years, fresh big data ideas and concepts have emerged to address the massive increase in data volumes in several commercial areas. Meanwhile, the phenomenal development of internet use and social media has not only added to the enormous volumes of data available but has also posed new hurdles to traditional data processing methods. For example, the insurance industry is known for being data-driven, as it generates massive volumes of accumulated material, both structured and unstructured, that typical data processing techniques cannot handle.

Purpose: In this study, the authors compare the benefits of big data technologies against the needs of insurance data processing and decision-making. A case study evaluation concentrating on the primary use cases of big data in the insurance business is also included.

Methodology: This chapter examines the essential big data technologies and tools from the insurance industry’s perspective. The study also includes an analytical review that substantiates several gains made by insurance companies, such as more efficient processing of large, heterogeneous data sets and better decision-making support. In addition, the study examines in depth the top seven use cases of big data in insurance, justifying their use and the value they add. Finally, it reviews contemporary big data technologies and tools, concentrating on their key concepts and recommended applications in the insurance business through examples.

Findings: The study has demonstrated the value of implementing big data technologies and tools, which enable the development of powerful new business models, allowing insurers to advance from ‘understand and protect’ to ‘predict and prevent’.

Details

Big Data Analytics in the Insurance Market
Type: Book
ISBN: 978-1-80262-638-4


Article
Publication date: 10 May 2013

Peter Filipp Fuchs, Klaus Fellner and Gerald Pinter

The purpose of this paper is to analyse, in a finite element simulation, the failure of a multilayer printed circuit board (PCB), exposed to an impact load, to better evaluate the…

Abstract

Purpose

The purpose of this paper is to analyse, in a finite element simulation, the failure of a multilayer printed circuit board (PCB) exposed to an impact load, in order to better evaluate its reliability and lifetime. The focus is on failures in the outermost epoxy layer.

Design/methodology/approach

The fracture behaviour of the affected material was characterized. The parameters of a cohesive zone law were determined by performing a double cantilever beam test and a corresponding simulation. The cohesive zone law was used in an enriched finite element local simulation model to predict the crack initiation and crack propagation. Using the determined location of the initial crack, the energy release rate at the crack tip was calculated, allowing an evaluation of the local loading situation.
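The abstract does not reproduce the underlying fracture-mechanics relations. As a hedged illustration using textbook simple beam theory for a double cantilever beam specimen (which may differ from the data-reduction scheme actually used by the authors), the Mode I energy release rate can be estimated as G_I = P²a²/(bEI) with I = bh³/12 per arm; the numbers in the sketch below are assumed, not taken from the paper.

```python
# Textbook simple-beam-theory estimate of the Mode I energy release rate for a
# double cantilever beam (DCB) specimen; illustrative values only, not the
# material data or any corrected beam theory used in the paper.
def dcb_energy_release_rate(P, a, b, h, E):
    """G_I = P^2 * a^2 / (b * E * I), with I = b * h^3 / 12 for one arm.

    P: applied load [N], a: crack length [m], b: specimen width [m],
    h: arm thickness [m], E: flexural modulus [Pa]. Returns G_I in J/m^2.
    """
    I = b * h ** 3 / 12.0
    return P ** 2 * a ** 2 / (b * E * I)

# Assumed example numbers (not from the paper):
print(dcb_energy_release_rate(P=20.0, a=0.04, b=0.02, h=0.0015, E=20e9))
```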

Findings

Good agreement between the simulated and the experimentally observed failure patterns was found. By calculating the energy release rate for two example PCBs, the significant influence of the chosen PCB type on the local failure behaviour was demonstrated.

Originality/value

The work presented in this paper allows for the simulation and evaluation of failure in the outermost epoxy layers of printed circuit boards due to impact loads.

Content available
Book part
Publication date: 19 July 2022

Abstract

Details

Big Data: A Game Changer for Insurance Industry
Type: Book
ISBN: 978-1-80262-606-3

Book part
Publication date: 18 July 2022

Teena Pareek, Kiran Sood and Simon Grima

Introduction: New ideas and concepts of big data have emerged in recent years in response to the astounding growth of data in many industries. Furthermore, the phenomenal increase…

Abstract

Introduction: New ideas and concepts of big data have emerged in recent years in response to the astounding growth of data in many industries. Furthermore, the phenomenal increase in the use of the internet and social media has added enormous amounts of data to conventional data processing systems, and it has also created challenges for traditional data processing.

Purpose: A significant characteristic of the insurance sector is that it is critically dependent on information. The sector generates a great deal of structured and unstructured data, which traditional data processing techniques cannot handle. Against the requirements of conventional insurance data processing and decision-making, this chapter presents an analysis of big data technology’s value additions.

Research methodology: The authors assess the primary use cases of big data in the insurance industry via a case study analysis. From the perspective of the insurance sector, this chapter examines the concepts, technologies and tools of big data. A few analytical reviews from insurance companies are also provided, which justify several gains achieved either through more efficient processing of massive, diverse data sets or through support for better decisions.

Findings: This chapter demonstrates the importance of adopting new business models that allow insurers to move beyond ‘understand and protect’ and become more predictive and preventative by using the tools and technologies of big data.

Details

Big Data Analytics in the Insurance Market
Type: Book
ISBN: 978-1-80262-638-4

