Shanyong Wang, Jun Li and Dingtao Zhao
Abstract
Purpose
The purpose of this paper is to apply an extended technology acceptance model to examine medical data analysts' intention to use medical big data processing techniques.
Design/methodology/approach
A questionnaire survey was used to collect data from 293 medical data analysts, and the data were analyzed using structural equation modeling.
Findings
The results indicate that perceived usefulness, social influence and attitude are important determinants of the intention to use medical big data processing techniques, and that the direct effect of perceived usefulness on intention to use is greater than that of social influence or attitude. Perceived usefulness is influenced by perceived ease of use. Attitude is influenced by perceived usefulness and mediates the relationship between perceived usefulness and usage intention. Unexpectedly, attitude is not influenced by perceived ease of use or social influence.
Originality/value
This research examines medical data analysts' intention to use medical big data processing techniques and offers several implications for their adoption.
Mahmoud El Samad, Sam El Nemar, Georgia Sakka and Hani El-Chaarani
Abstract
Purpose
The purpose of this paper is to propose a new conceptual framework for big data analytics (BDA) in the healthcare sector for the European Mediterranean region. The objective of this new conceptual framework is to improve the health conditions in a dynamic region characterized by the appearance of new diseases.
Design/methodology/approach
This study presents a new conceptual framework that could be employed in the European Mediterranean healthcare sector. In practice, the framework can enhance medical services, support smart decisions based on accurate healthcare data and, thanks to data quality control, reduce medical treatment costs.
Findings
This research proposes a new conceptual framework for BDA in the healthcare sector that could be integrated in the European Mediterranean region. The framework introduces a big data quality (BDQ) module to filter and clean the data provided by different European data sources. The BDQ module acts in a loop mode: bad data are redirected to their data source (e.g. the European Centre for Disease Prevention and Control, university hospitals) to be corrected, improving the overall data quality in the proposed framework. Finally, clean data are directed to the BDA stage to support quick, efficient decisions involving all concerned stakeholders.
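The loop-mode behaviour of a BDQ module can be sketched as follows. This is a minimal illustration, not the authors' implementation: the record fields, the validation rule and the source names are assumptions made here purely for demonstration.

```python
# Sketch of a loop-mode big data quality (BDQ) module: records failing
# validation are redirected, grouped by their originating source, for
# correction, while clean records flow on to the analytics (BDA) stage.

def validate(record):
    """Toy validation rule: a record is 'clean' if it has a patient id
    and a plausible age. Real BDQ checks would be far richer."""
    return record.get("patient_id") is not None and 0 <= record.get("age", -1) <= 120

def bdq_filter(records):
    """Split incoming records into clean data (for analytics) and bad
    data, keyed by source so each source can correct its own records."""
    clean, redirected = [], {}
    for rec in records:
        if validate(rec):
            clean.append(rec)
        else:
            redirected.setdefault(rec["source"], []).append(rec)
    return clean, redirected

records = [
    {"source": "ECDC", "patient_id": 1, "age": 34},
    {"source": "university_hospital", "patient_id": None, "age": 51},
    {"source": "ECDC", "patient_id": 2, "age": 180},
]
clean, redirected = bdq_filter(records)
# Only the first record passes; the others go back to their sources.
```

In a full loop, corrected records would re-enter `bdq_filter` until they pass, which is what keeps bad data out of the analytics stage.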
Practical implications
This study proposes a new conceptual framework for executives in the healthcare sector to improve the decision-making process, decrease operational costs, enhance management performance and save human lives.
Originality/value
This study focused on big data management and BDQ in the European Mediterranean healthcare sector as a broadly considered fundamental condition for the quality of medical services and conditions.
Sumeer Gul, Shohar Bano and Taseen Shah
Abstract
Purpose
Data mining, along with its varied technologies such as numerical mining, textual mining, multimedia mining, web mining, sentiment analysis and big data mining, has proven itself an emerging field, manifesting in different techniques such as information mining; big data mining; big data mining and the Internet of Things (IoT); and educational data mining. This paper aims to discuss how these technologies and techniques are used to derive information and, eventually, knowledge from data.
Design/methodology/approach
An extensive review of the literature on data mining and its allied techniques was carried out to ascertain the emerging procedures and techniques in the domain. Clarivate Analytics' Web of Science and SciVerse Scopus were explored to discover the extent of the literature published on data mining and its varied facets. The literature was searched against various keywords such as data mining; information mining; big data; big data and IoT; and educational data mining. Further, works citing the data mining literature were also explored to visualize the broad gamut of emerging techniques in this growing field.
Findings
The study validates that knowledge discovery in databases has rendered data mining an emerging field; the data present in these databases pave the way for data mining techniques and analytics. The paper provides a unique view of how data, and the logical patterns derived from them, are used, and of how new procedures, algorithms and mining techniques are continuously upgraded for multipurpose use and the betterment of human life and experience.
Practical implications
The paper highlights different aspects of data mining, its different technological approaches, and how these emerging data technologies are used to derive logical insights from data and make data more meaningful.
Originality/value
The paper highlights the current trends and facets of data mining.
Abstract
The chapter deliberates on research ethics and the unanticipated side effects that technological developments have brought in the past decades. It looks at data protection and privacy through the prism of ethics and focuses on the need for safeguarding the fundamental rights of research participants in the new digital era. Acknowledging the benefits of data analytics for boosting scientific progress, the chapter reflects on the main principles and specific research derogations introduced by the EU General Data Protection Regulation. Further on, it discusses some of the most pressing ethics concerns related to the use, reuse and misuse of data; the distinction between publicly available and open data; ethics challenges in the online recruitment of research participants; and the potential bias and representativeness problems of Big Data research. The chapter underscores that all challenges should be properly addressed at the outset of research design. Highlighting the power asymmetries between Big Data studies and individuals' rights to data protection, human dignity, and respect for private and family life, the chapter argues that anonymization may be reasonable, yet not the ultimate ethics solution. It asserts that while anonymization techniques may protect individual data protection rights, they may not be sufficient to prevent discrimination and stigmatization of entire groups of populations. Finally, the chapter suggests some approaches for ensuring ethics compliance in the digital era.
Elham Ali Shammar and Ammar Thabit Zahary
Abstract
Purpose
The internet has radically changed the way people interact in the virtual world, in their careers and in their social relationships. IoT technology has added a new vision to this process by enabling connections between smart objects and humans, and between smart objects themselves, leading to anything, anytime, anywhere, any-media communications. IoT allows objects to physically see, hear, think and perform tasks by making them talk to each other, share information and coordinate decisions. To realize this vision, IoT utilizes technologies such as ubiquitous computing, context awareness, RFID, WSNs, embedded devices, CPSs, communication technologies and internet protocols. IoT is considered the future internet, significantly different from the internet we use today. The purpose of this paper is to provide up-to-date literature on trends in IoT research, driven by the need for convergence of several interdisciplinary technologies and new applications.
Design/methodology/approach
A comprehensive IoT literature review has been performed in this paper as a survey. The survey starts by providing an overview of IoT concepts, visions and evolutions. IoT architectures are also explored. Then, the most important components of IoT are discussed, including a thorough discussion of IoT operating systems such as TinyOS, Contiki OS, FreeRTOS and RIOT. A review of IoT applications is also presented and, finally, IoT challenges recently encountered by researchers are introduced.
Findings
Studies of the IoT literature and of IoT projects show the disproportionate importance of technology in such projects, which are often driven by technological interventions rather than innovation in the business model. There are a number of serious concerns about the dangers of IoT growth, particularly in the areas of privacy and security; hence, industry and government have begun addressing them. In the end, what makes IoT exciting is that we do not yet know the exact use cases that will significantly influence our lives.
Originality/value
This survey provides a comprehensive literature review on IoT techniques, operating systems and trends.
Tharushi Sandunika Ilangakoon, Samanthi Kumari Weerabahu, Premaratne Samaranayake and Ruwan Wickramarachchi
Abstract
Purpose
This paper proposes the adoption of Industry 4.0 (I4) technologies and lean techniques for improving operational performance in the healthcare sector.
Design/methodology/approach
The research adopted a systematic literature review and feedback of healthcare professionals to identify the inefficiencies in the current healthcare system. A questionnaire was used to get feedback from the patients and the hospital staff about the current practices and issues, and the expected impact of technology on existing practices. Data were analysed using descriptive statistics, correlation analysis and multiple regression analysis.
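The correlation step of the analysis above can be illustrated with a short sketch. The survey scores below are entirely hypothetical (the study's actual data are not reproduced here); the sketch only shows how a Pearson correlation between, say, technology-adoption and performance ratings would be computed.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert-scale responses: perceived I4-technology adoption
# vs. perceived operational performance, one pair per respondent.
i4_adoption = [3, 4, 5, 2, 4, 5, 3]
performance = [3, 4, 5, 2, 5, 4, 3]
r = pearson_r(i4_adoption, performance)
# A value of r near 1 would indicate a strong positive association.
```

The multiple regression step then extends this idea by regressing performance on several such predictors at once.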
Findings
The results indicate that I4 technologies lead to improved operational performance, and that perceptions of I4 technologies are formed through pre-medical diagnosis. However, the weaker correlation between lean practices and healthcare operational performance, compared with that between I4 technologies and operational performance, indicates that lean practices are not yet implemented in the Sri Lankan healthcare sector to their full potential.
Research limitations/implications
This study is limited to two government hospitals, with insights from only the doctors and nurses in Sri Lanka. Furthermore, the study is limited to only selected aspects of I4 technologies (big data, cloud computing and IoT) and lean concepts (value stream mapping and 5S). Therefore, recommendations on the adoption of I4 technologies in the healthcare sector need to be made within the scope of the study investigation.
Practical implications
The implementation of I4 technologies needs careful consideration of process improvement as part of the overall plan for achieving the maximum benefits of technology adoption.
Originality/value
The findings of the research can be used as a benchmark/guide for other hospitals to explore the adoption of I4 technologies, and how process improvement from lean concepts could influence the overall operational performance.
Abstract
Purpose
This work can be used as a building block in other settings such as GPU, Map-Reduce, Spark or any other platform. Also, DDPML can be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.
Design/methodology/approach
In the age of Big Data, all companies want to benefit from large amounts of data. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can later be used for prediction. This knowledge thus becomes a great asset in companies' hands, which is precisely the objective of data mining. But with data and knowledge now produced at a much faster pace, the field has moved to Big Data mining. For this reason, the authors' proposed work mainly aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem the authors raise is how machine learning algorithms can work in a distributed and parallel way at the same time without losing classification accuracy.

To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a Map-Reduce algorithm that in turn depends on a random sampling technique. This architecture is specially designed to handle big data processing coherently and efficiently with the sampling strategy proposed in this work, and it also allows the authors to verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. This sampling method is also applied to extract the shared learning base (SLB) and the partial learning bases for the first level (PLBL1) and the second level (PLBL2).

The experimental results show the efficiency of the authors' solution, achieved without significant loss in the classification results. In practical terms, the DDPML system is generally dedicated to big data mining processing and works effectively in distributed systems with a simple structure, such as client-server networks.
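The two-level stratified sampling step can be sketched as follows. This is an illustrative reading of the method, not the authors' code: the strata (partition node, then class label), dataset sizes and sampling fractions are assumptions chosen here for demonstration.

```python
import random
from collections import defaultdict

def stratified_sample(data, key, fraction, rng):
    """Simple stratified random sample: group records by the stratum
    key, then draw the same fraction from each stratum without
    replacement, so every stratum stays represented."""
    strata = defaultdict(list)
    for rec in data:
        strata[rec[key]].append(rec)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, k))
    return sample

rng = random.Random(42)
# Toy labelled dataset: 'label' is the class, 'node' the partition it lives on.
data = [{"label": i % 2, "node": i % 4, "x": i} for i in range(200)]

# Level 1: sample within each distributed node (partial learning bases).
level1 = stratified_sample(data, "node", 0.5, rng)
# Level 2: sample within each class to build a representative learning base.
rlb = stratified_sample(level1, "label", 0.5, rng)
```

Because both levels sample proportionally per stratum, the final learning base preserves the class balance of the original data, which is what lets a classifier trained on it approximate results on the full dataset.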
Findings
The authors obtained very satisfactory classification results.
Originality/value
The DDPML system is specially designed to handle big data mining classification smoothly.
Abstract
With the advent of Big Data, storing and using unprecedented amounts of clinical information is now feasible via Electronic Health Records (EHRs). The massive collection of clinical data by health care systems and treatment centers can be productively used to perform predictive analytics on treatment plans and improve patient health outcomes. These massive data sets have stimulated opportunities to adapt computational algorithms to track and identify target areas for quality improvement in health care.
According to a report from the Association of American Medical Colleges, there will be an alarming gap between demand and supply in the health care workforce in the near future. The projections show that, by 2032, there will be a shortfall of between 46,900 and 121,900 physicians in the US (AAMC, 2019). Therefore, early prediction of health care risks is a demanding requirement for improving health care quality and reducing health care costs. Predictive analytics uses historical data and algorithms based on either statistics or machine learning to develop predictive models that capture important trends. These models have the ability to predict the likelihood of future events. Predictive models developed using supervised machine learning approaches are commonly applied to various health care problems such as disease diagnosis, treatment selection, and treatment personalization.
This chapter provides an overview of various machine learning and statistical techniques for developing predictive models. Case examples from the extant literature illustrate the role of predictive modeling in health care research. The adaptation of these predictive modeling techniques to Big Data analytics underscores the need for standardization and transparency while recognizing the opportunities and challenges ahead.
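A minimal sketch of the kind of supervised predictive model the chapter surveys: logistic regression fitted by gradient descent. The "patients" below are invented toy data (normalized age and prior-admission count predicting readmission), not clinical data, and the model is a generic textbook formulation rather than any specific technique from the chapter.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression weights (w[0] is the intercept) by
    batch gradient descent on the log-loss."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Predicted probability of the positive outcome for one case."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical patients: (normalized age, prior admissions) -> readmitted (1) or not (0)
X = [(0.2, 0), (0.3, 1), (0.8, 2), (0.9, 3), (0.4, 0), (0.7, 2)]
y = [0, 0, 1, 1, 0, 1]
w = train_logistic(X, y)
```

In practice such a model would be trained on EHR-derived features and validated on held-out data; the "important trends" the chapter mentions correspond to the learned weights.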
K. Kalaiselvi and A. Thirumurthi Raja
Abstract
Health care is one of the most promising areas where Big Data can be applied to make a change. Healthcare analytics has the potential to reduce treatment costs, forecast outbreaks of epidemics, avoid preventable diseases and improve the quality of life. In general, human lifespans are increasing along with the world population, which poses new challenges to today's treatment delivery methods. Health professionals are capable of gathering enormous volumes of data and are looking for the best approaches to use them. Big data analytics has helped the healthcare area by providing personalized medicine and prescriptive analytics, medical risk intervention and predictive analytics, computerized external and internal reporting of patient data, homogeneous medical terms and patient registries, and fragmented point solutions. The level of data generated within healthcare systems is significant, including electronic health record data, imaging data, patient-generated data, etc. Information in health care is now mostly electronic and falls under big data, as most of it is unstructured and difficult to use. The use of big data in health care has raised substantial ethical challenges, ranging from risks to specific rights, privacy and autonomy, to transparency and trust.