Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 6 March 2017

Zhuoxuan Jiang, Chunyan Miao and Xiaoming Li


Abstract

Purpose

Recent years have witnessed the rapid development of massive open online courses (MOOCs). With more and more courses being produced by instructors and taken by learners all over the world, an unprecedented volume of educational resources is being aggregated. These resources include videos, subtitles, lecture notes, quizzes, etc. on the teaching side, and forum content, wikis, logs of learning behavior, homework logs, etc. on the learning side. However, the data are both unstructured and diverse. To facilitate knowledge management and mining on MOOCs, extracting keywords from these resources is important. This paper aims to adapt state-of-the-art techniques to MOOC settings and evaluate their effectiveness on real data. In practical terms, the paper is also the first to ask to what extent MOOC resources can support keyword extraction models and how much human effort is required to make the models work well.

Design/methodology/approach

Based on which side generates the data, i.e. instructors or learners, the data are classified into teaching resources and learning resources, respectively. The approach used on teaching resources is based on supervised machine learning models with labels, while the approach used on learning resources is based on an unsupervised graph model without labels.
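
The abstract reports no code, but a minimal TextRank-style sketch (Python with networkx; the window size and toy tokens are illustrative choices, not the authors' implementation) shows the kind of unsupervised graph model that can rank keywords in unlabeled forum text:

```python
# Minimal TextRank-style keyword ranking sketch (not the authors' implementation).
# Assumes a pre-tokenized document; window size and token list are illustrative.
import networkx as nx

def extract_keywords(tokens, window=4, top_k=10):
    """Rank candidate keywords by PageRank over a word co-occurrence graph."""
    graph = nx.Graph()
    graph.add_nodes_from(set(tokens))
    # Connect words that co-occur within a sliding window.
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if word != other:
                graph.add_edge(word, other)
    scores = nx.pagerank(graph)          # unsupervised importance scores
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

# Example: forum post tokens (stop words assumed removed beforehand).
tokens = ["gradient", "descent", "converges", "learning", "rate",
          "gradient", "descent", "loss", "function", "learning", "rate"]
print(extract_keywords(tokens, top_k=5))
```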

Findings

From the teaching resources, the authors' methods can accurately extract keywords with only 10 per cent labeled data. The authors find that resources of different forms, e.g. subtitles and PPTs, should be considered separately because the models perform differently on them. From the learning resources, the keywords extracted from MOOC forums are not as domain-specific as those extracted from teaching resources, but they reflect the topics actively discussed in the forums, which gives instructors useful feedback. The authors implement two applications with the extracted keywords: generating concept maps and generating learning paths. The visual demos show that both have the potential to improve learning efficiency when integrated into a real MOOC platform.

Research limitations/implications

Conducting keyword extraction on MOOC resources is difficult because teaching resources are hard to obtain due to copyright restrictions. Obtaining labeled data is also challenging because it usually requires expertise in the corresponding domain.

Practical implications

The experimental results show that MOOC resources are sufficient for building keyword extraction models and that an acceptable balance between human effort and model accuracy can be achieved.

Originality/value

This paper presents a pioneering study of keyword extraction on MOOC resources and reports several new findings.

Details

International Journal of Crowd Science, vol. 1 no. 1
Type: Research Article
ISSN: 2398-7294


Open Access
Article
Publication date: 12 April 2021

Björn Hammarfelt


Abstract

Purpose

In this article, the ideas and methods behind the “patent-paper citation” are scrutinised by following the intellectual and technical development of approaches and ideas in early work on patentometrics. The aim is to study how references from patents to papers came to play a crucial role in establishing a link between science and technology.

Design/methodology/approach

The study comprises a conceptual history of the “patent-paper citation” and its emergence as an important indicator of science and technology interaction. By tracing key references in the field, it analyses the overarching frameworks and ideas, the conceptual “hinterland”, in which the approach of studying patent references emerged.

Findings

The analysis explains how interest in patents – not only as legal and economic artefacts but also as scientific documents – became evident in the 1980s. The focus on patent citations was sparked by a need for relevant and objective indicators and by the greater availability of databases and methods. Yet, the development of patentometrics also relied on earlier research, and established theories, on the relation between science and technology.

Originality/value

This is the first attempt at situating patentometrics in a larger societal and scientific context. The paper offers a reflexive and nuanced analysis of the “patent-paper citation” as a theoretical and historical construct, and it calls for a broader and contextualised understanding of patent references, including their social, legal and rhetorical function.

Details

Journal of Documentation, vol. 77 no. 6
Type: Research Article
ISSN: 0022-0418


Open Access
Article
Publication date: 2 November 2020

Carlo Giua, Valentina Cristiana Materia and Luca Camanzi


Abstract

Purpose

This paper reviews the academic contributions that have emerged to date on the broad definition of farm-level management information systems (MISs). The purpose is twofold: (1) to identify the theories used in the literature to study the adoption of digital technologies and (2) to identify the drivers of and barriers to the adoption of such technologies.

Design/methodology/approach

The study was based on a comprehensive review of contributions published in the 1998–2019 period. The search was both automated and manual, and included browsing through the references of works previously found via high-quality digital libraries.

Findings

Diffusion of innovations (DOIs) is the most frequently used theoretical framework in the literature reviewed, though it is often combined with other innovation adoption theories. In addition, farms’ and farmers’ traits, together with technological features, play a key role in explaining the adoption of these technologies.

Research limitations/implications

So far, research has positioned the determinants of digital technology adoption mainly within the boundaries of the farm.

Practical implications

On a practical level, the extensive review of determinants can help policymakers and the technology industry clearly and thoroughly understand adoption dynamics and elaborate specific strategies to deal with them.

Originality/value

This study’s contribution to the existing body of knowledge on the farm-level adoption of digital technologies is twofold: (1) it combines smart farming and existing technologies within the same category of farm-level MIS and (2) it extends the analysis to studies which not only focus directly on adoption but also on software architecture design and development.

Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen


Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimates strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors introduced at these steps, and its results are intended as guidelines for practitioners to develop a new workflow, or refine an existing one, for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimates, identifying an appropriate segmentation method to extract the data points of a deformed surface, investigating a methodology to determine an un-deformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the point clouds and of the surface extraction strongly affects deformation estimation based on laser scanning data, and unaccounted-for uncertainty can lead to an incorrect decision on the state of the structure. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). These issues are investigated by estimating the deformation of a bridge abutment, a building and an oil storage tank.
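
To make the workflow concrete, here is a minimal sketch (NumPy only; the synthetic point cloud, distance threshold and iteration count are illustrative, not the paper's parameters) of fitting a reference plane with RANSAC and reading deformation as signed point-to-plane distances:

```python
# Illustrative RANSAC plane fit and point-to-plane deformation (not the paper's exact workflow).
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.005, seed=0):
    """Fit a plane (n, d) with n.x + d = 0 to a point cloud using RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)    # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic example: a roughly planar surface with noise and a few outliers.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, size=(1000, 2))
z = 0.002 * rng.standard_normal(1000)          # small surface roughness
cloud = np.column_stack([xy, z])
cloud[:20, 2] += 0.05                          # simulated outliers / mixed pixels
(normal, d), inliers = ransac_plane(cloud)
deformation = cloud @ normal + d               # signed deviation from the reference plane
print(f"inliers: {inliers.sum()}, max |deformation|: {np.abs(deformation).max():.4f} m")
```

Note that, as the Findings below argue, fitting the reference plane to the current (possibly damaged) surface is itself a source of error; in practice the reference surface would preferably come from design specifications.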

Findings

The study shows that both random sample consensus (RANSAC) and region-growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primitive surface (e.g. a plane in this study) subject to small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. CRG and VRG, on the other hand, are suitable for deformed, free-form surfaces. In addition, a reference surface of a structure is mostly not available in practice. Using a plane fitted to the point cloud of the current surface would yield unrealistic and inaccurate deformation estimates, because outliers and data points from damaged areas affect the accuracy of the fitted plane. This study therefore recommends using a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Research limitations/implications

Due to logistical difficulties, independent measurements could not be established to assess the accuracy of the TLS-based deformation estimates in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study gives insight into the errors introduced at each step of the workflow, and its results serve as guidelines for practitioners to develop a new workflow, or refine an existing one, for deformation estimation based on TLS point clouds.

Social implications

The results of this study provide guidelines for practitioners to develop a new workflow, or refine an existing one, for deformation estimation based on TLS point clouds. A low-cost method can thus be applied for deformation analysis of structures.

Originality/value

Although many studies have used laser scanning to measure structural deformation over the last two decades, the methods applied have mainly measured change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, challenges remain in establishing an appropriate procedure for collecting high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation estimates when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality, in terms of point cloud registration error, the methods selected for extracting point clouds of surfaces, the identification of reference information, and the presence of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708


Open Access
Article
Publication date: 12 June 2017

Lichao Zhu, Hangzhou Yang and Zhijun Yan


Abstract

Purpose

The purpose of this paper is to develop a new method to extract medical temporal information from online health communities.

Design/methodology/approach

The authors trained a conditional random field model for the extraction of temporal expressions. Temporal relation identification is treated as a classification task, and several support vector machine classifiers are built in the proposed method. For model training, the authors extracted high-level semantic features, including co-reference relationships between medical concepts and the semantic similarity among words.
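
As an illustration of the first step, here is a minimal sketch of BIO tagging of temporal expressions with a linear-chain CRF (assuming the sklearn-crfsuite library; the features and toy sentences are placeholders, not the authors' feature set or data):

```python
# Illustrative BIO tagging of temporal expressions (TIMEX) with a linear-chain CRF.
# Features and training data are toy placeholders, not the paper's configuration.
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_digit": w.isdigit(),
        "suffix3": w[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

def sent2features(sent):
    return [word_features(sent, i) for i in range(len(sent))]

# Toy training data: tokens with BIO labels marking temporal expressions.
train_sents = [["Symptoms", "appeared", "three", "days", "after", "onset"],
               ["Take", "the", "medication", "twice", "daily"]]
train_labels = [["O", "O", "B-TIMEX", "I-TIMEX", "I-TIMEX", "I-TIMEX"],
                ["O", "O", "O", "B-TIMEX", "I-TIMEX"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit([sent2features(s) for s in train_sents], train_labels)
print(crf.predict([sent2features(["Fever", "lasted", "two", "days"])]))
```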

Findings

For the extraction of TIMEX, the authors find that well-formatted expressions are easy to recognize, and the main challenge is relative TIMEX such as “three days after onset”. Normalization of absolute dates and well-formatted durations shows similar difficulty, whereas frequencies are easier to normalize. For the identification of DocTimeRel, the results are fairly good, but the relation is difficult to identify when it involves a relative TIMEX or a hypothetical concept.

Originality/value

The authors propose a new method to extract temporal information from online clinical data and evaluate the usefulness of different levels of syntactic features in this task.

Details

International Journal of Crowd Science, vol. 1 no. 2
Type: Research Article
ISSN: 2398-7294


Open Access
Article
Publication date: 21 July 2020

Prajowal Manandhar, Prashanth Reddy Marpu and Zeyar Aung


Abstract

We make use of Volunteered Geographic Information (VGI) data to extract the total extent of roads from remote sensing images. VGI data are often provided only as vector data represented by lines, not as full road extents. Moreover, high geolocation accuracy is not guaranteed, and it is common to observe misalignment with the target road segments by several pixels on the images. In this work, we use the prior information provided by the VGI to extract the full road extent even when there is significant mis-registration between the VGI and the image. The method consists of image segmentation followed by the traversal of multiple agents along the available VGI information. First, we perform image segmentation; then we traverse the fragmented road segments using autonomous agents to obtain a complete road map in a semi-automatic way once the seed points are defined. The road centre-line in the VGI guides the process and allows us to discover and extract the full extent of the road network from the image data. The results demonstrate the validity and good performance of the proposed method: the extracted roads reflect the actual road width despite disturbances such as shadows, cars and trees, which shows the effectiveness of fusing VGI and satellite images.
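
As a rough illustration of how a misaligned VGI centreline can guide extraction from a segmented image, the sketch below uses NumPy/SciPy; the binary mask, polyline, search radius and component-selection logic are simplified placeholders, not the authors' agent-based traversal:

```python
# Simplified sketch: use a (possibly misaligned) VGI centreline to select road
# components from a segmentation mask. Not the paper's multi-agent method.
import numpy as np
from scipy.ndimage import label

def snap_to_road(mask, point, radius=5):
    """Snap a (row, col) VGI vertex to the nearest road pixel in a search window."""
    r, c = point
    r0, r1 = max(r - radius, 0), min(r + radius + 1, mask.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, mask.shape[1])
    window = np.argwhere(mask[r0:r1, c0:c1])
    if window.size == 0:
        return None
    nearest = window[np.argmin(np.sum((window - [r - r0, c - c0]) ** 2, axis=1))]
    return nearest[0] + r0, nearest[1] + c0

def extract_road_extent(mask, vgi_polyline, radius=5):
    """Keep only the segmented components touched by the snapped VGI vertices."""
    components, _ = label(mask)                 # connected road fragments
    keep = set()
    for vertex in vgi_polyline:
        snapped = snap_to_road(mask, vertex, radius)
        if snapped is not None:
            keep.add(components[snapped])
    return np.isin(components, list(keep)) & mask.astype(bool)

# Toy example: a centreline shifted by 2 pixels still selects the right component.
mask = np.zeros((20, 20), dtype=np.uint8)
mask[8:11, :] = 1                               # a horizontal "road" 3 pixels wide
vgi = [(6, 2), (6, 10), (6, 18)]                # misaligned centreline vertices
print(extract_road_extent(mask, vgi, radius=4).sum(), "road pixels recovered")
```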

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 11 October 2023

Bachriah Fatwa Dhini, Abba Suganda Girsang, Unggul Utan Sufandi and Heny Kurniawati


Abstract

Purpose

The authors constructed an automatic essay scoring (AES) model for a discussion forum and compared its results with scores given by human evaluators. The proposed essay scoring is based on two parameters, semantic similarity and keyword similarity, using pre-trained SentenceTransformers models that produce high-quality vector embeddings. The models are combined to optimize scoring accuracy.

Design/methodology/approach

The development of the model is divided into seven stages: (1) data collection, (2) data pre-processing, (3) selection of a pre-trained SentenceTransformers model, (4) semantic similarity (sentence pair), (5) keyword similarity, (6) final score calculation and (7) model evaluation.

Findings

The paraphrase-multilingual-MiniLM-L12-v2 and distilbert-base-multilingual-cased-v1 models obtained the highest scores in a comparison of 11 pre-trained multilingual SentenceTransformers models on Indonesian data (Dhini and Girsang, 2023), and both were adopted in this study. The two parameters are combined by comparing the keywords extracted from the responses with the rubric keywords. Based on the experimental results, the proposed combination increases the evaluation score by 0.2.
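
For illustration, here is a minimal sketch of how semantic and keyword similarities might be combined (assuming the sentence-transformers library; the 0.7 weighting, substring keyword matching and example texts are placeholders, not the authors' configuration):

```python
# Illustrative combination of semantic and keyword similarity for essay scoring.
# Weighting, keyword matching and texts are placeholders, not the paper's setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_similarity(answer, reference):
    emb = model.encode([answer, reference], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def keyword_similarity(answer, rubric_keywords):
    text = answer.lower()
    hits = sum(1 for kw in rubric_keywords if kw.lower() in text)
    return hits / max(len(rubric_keywords), 1)

def final_score(answer, reference, rubric_keywords, w_semantic=0.7):
    # Weighted combination of the two parameters; the weight is illustrative.
    return (w_semantic * semantic_similarity(answer, reference)
            + (1 - w_semantic) * keyword_similarity(answer, rubric_keywords))

print(final_score("Photosynthesis converts light energy into chemical energy.",
                  "Plants use photosynthesis to turn light into chemical energy.",
                  ["photosynthesis", "light", "chemical energy"]))
```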

Originality/value

This study uses discussion forum data from the general biology course in online learning at the open university for the 2020.2 and 2021.2 semesters, where forum discussions are still scored manually. The authors created a model that automatically scores discussion forum essays against the lecturer's answers and rubrics.

Details

Asian Association of Open Universities Journal, vol. 18 no. 3
Type: Research Article
ISSN: 1858-3431


Content available
Book part
Publication date: 13 December 2017

Internet + and Electronic Business in China is a comprehensive resource that provides insight and analysis into E-commerce in China and how it has revolutionized and continues to…

Abstract

Internet + and Electronic Business in China is a comprehensive resource that provides insight and analysis into E-commerce in China and how it has revolutionized and continues to revolutionize business and society. Split into four distinct sections, the book first lays out the theoretical foundations and fundamental concepts of E-Business before moving on to look at internet+ innovation models and their applications in different industries such as agriculture, finance and commerce. The book then provides a comprehensive analysis of E-business platforms and their applications in China before finishing with four comprehensive case studies of major E-business projects, providing readers with successful examples of implementing E-Business entrepreneurship projects.

Internet + and Electronic Business in China is a comprehensive resource that provides insights and analysis into how E-commerce has revolutionized and continues to revolutionize business and society in China.

Details

Internet+ and Electronic Business in China: Innovation and Applications
Type: Book
ISBN: 978-1-78743-115-7

Open Access
Article
Publication date: 13 December 2022

Chau Thi Ngoc Pham, Hung Ngoc Phan, Thao Thanh Hoang, Tien Thi Thuy Dao and Huong Mai Bui


Abstract

Purpose

The health and environmental hazards associated with synthetic dyes have led to a revival of natural dyes that are non-toxic, environmentally benign and coupled with various functions. The study aims to investigate and develop the potentiality of a popular herb called Chromolaena odorata (C. odorata) as a sustainable and stable dyestuff in textiles.

Design/methodology/approach

Natural colorant extracted from C. odorata leaves is used to dye worsted fabric, one of the premier end-uses of wool in fashion, via the padding method with pre-, simultaneous and post-mordanting using chitosan, tannic acid and copper sulfate pentahydrate. The effects of the extraction, dyeing and mordanting processes on the fabric's color strength K/S and color difference ΔECMC are investigated via the International Commission on Illumination's L*a*b* color space, Fourier transform infrared spectroscopy, scanning electron microscopy, and color fastness to washing, rubbing, perspiration and light.
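
For context, color strength K/S is conventionally derived from measured reflectance through the Kubelka-Munk relation; a minimal sketch with illustrative reflectance values (not measurements from this study):

```python
# Kubelka-Munk relation used to report color strength K/S.
# Reflectance values below are illustrative, not data from the study.
import numpy as np

def color_strength(reflectance):
    """K/S = (1 - R)^2 / (2R) for reflectance R at the wavelength of maximum absorption."""
    r = np.asarray(reflectance, dtype=float)
    return (1 - r) ** 2 / (2 * r)

# Example: a reflectance of 0.20 (20%) at lambda_max gives K/S = 1.6.
print(color_strength([0.20, 0.35, 0.50]))
```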

Findings

The results indicate that extraction with 90% ethanol at a solid/liquid ratio of 1:5 for 1 h, and coloration at a liquor ratio of 1:5 (pH 5) for 2 h under a padding pressure of 0.3 MPa, are the most effective conditions for coloring worsted fabric.

Practical implications

The application of C. odorata as a highly effective dyestuff with good colorimetric performance expands this herb's economic potential, contributing to economic growth and adding value to wool in the global supply chain.

Originality/value

C. odorata dyestuff prevails over other natural colorants because of its impressive color fastness to washing, rubbing and perspiration, and especially its color stability under pH change.

Details

Research Journal of Textile and Apparel, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1560-6074


Content available
Article
Publication date: 12 April 2022

Subhamoy Dhua, Kshitiz Kumar, Vijay Singh Sharanagat and Prabhat K. Nema


Abstract

Purpose

The amount of food wasted every year is 1.3 billion metric tonnes (MT), of which 0.5 billion MT is contributed by the fruit processing industries. The waste includes by-products such as peels, pomace and seeds, and is a good source of bioactive compounds such as phenolic compounds, flavonoids, pectin, lipids and dietary fibres. Hence, the purpose of the present study is to review the novel techniques used for the extraction of bioactive compounds from food waste, in order to aid the selection of a suitable extraction method.

Design/methodology/approach

Novel extraction techniques such as ultrasound-assisted extraction, microwave-assisted extraction, enzyme-assisted extraction, supercritical fluid extraction, pulsed electric field extraction and pressurized liquid extraction have emerged to overcome the drawbacks and constraints of conventional extraction techniques. Hence, this study is focussed on novel extraction techniques, their limitations and optimization for the extraction of bioactive compounds from fruit and vegetable waste.

Findings

This study presents a comprehensive review of the novel extraction processes that have been adopted for the extraction of bioactive compounds from food waste. It also summarizes the optimum conditions for extracting bioactive compounds from various food wastes using these techniques.

Research limitations/implications

Food waste is rich in bioactive compounds, and their efficient extraction may add value to the food processing industries. Hence, a comprehensive analysis is needed to overcome the problems associated with extraction and with the selection of suitable extraction techniques.

Social implications

Selection of a suitable extraction method will not only add value to food waste but also reduce waste dumping and the cost of bioactive compounds.

Originality/value

This paper presents the research progress on the extraction of bioactive compounds from food waste using novel extraction techniques.

Details

Nutrition & Food Science, vol. 52 no. 8
Type: Research Article
ISSN: 0034-6659
