Search results
1–10 of over 1,000 results
Subhamoy Dhua, Kshitiz Kumar, Vijay Singh Sharanagat and Prabhat K. Nema
Abstract
Purpose
The amount of food wasted every year is 1.3 billion metric tonnes (MT), of which 0.5 billion MT is contributed by the fruit processing industries. The waste includes by-products such as peels, pomace and seeds and is a good source of bioactive compounds such as phenolic compounds, flavonoids, pectin, lipids and dietary fibres. Hence, the purpose of the present study is to review the novel techniques used for extracting bioactive compounds from food waste, to aid the selection of a suitable extraction method.
Design/methodology/approach
Novel extraction techniques such as ultrasound-assisted extraction, microwave-assisted extraction, enzyme-assisted extraction, supercritical fluid extraction, pulsed electric field extraction and pressurized liquid extraction have emerged to overcome the drawbacks and constraints of conventional extraction techniques. Hence, this study is focussed on novel extraction techniques, their limitations and optimization for the extraction of bioactive compounds from fruit and vegetable waste.
Findings
This study presents a comprehensive review of the novel extraction processes that have been adopted for the extraction of bioactive compounds from food waste. This paper also summarizes the optimum extraction conditions for bioactive compounds from various food wastes using novel extraction techniques.
Research limitations/implications
Food waste is rich in bioactive compounds, and its efficient extraction may add value to the food processing industries. Hence, a comprehensive analysis is needed to overcome the problems associated with extraction and with the selection of suitable extraction techniques.
Social implications
Selection of a suitable extraction method will not only add value to food waste but also reduce waste dumping and the cost of bioactive compounds.
Originality/value
This paper presents the research progress on the extraction of bioactive compounds from food waste using novel extraction techniques.
Zhuoxuan Jiang, Chunyan Miao and Xiaoming Li
Abstract
Purpose
Recent years have witnessed the rapid development of massive open online courses (MOOCs). With more and more courses being produced by instructors and taken by learners all over the world, unprecedented massive educational resources are aggregated. The educational resources include videos, subtitles, lecture notes, quizzes, etc., on the teaching side, and forum contents, Wiki, logs of learning behavior, logs of homework, etc., on the learning side. However, the data are both unstructured and diverse. To facilitate knowledge management and mining on MOOCs, extracting keywords from the resources is important. This paper aims to adapt state-of-the-art techniques to MOOC settings and evaluate their effectiveness on real data. In terms of practice, this paper also tries to answer, for the first time, the questions of to what extent MOOC resources can support keyword extraction models and how much human effort is required to make the models work well.
Design/methodology/approach
Based on which side generates the data, i.e. instructors or learners, the data are classified into teaching resources and learning resources, respectively. The approach used on teaching resources is based on machine learning models with labels, while the approach used on learning resources is based on a graph model without labels.
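The label-free, graph-based side of this approach can be illustrated with a TextRank-style sketch (an assumption for illustration only; the paper does not publish its exact model, and the forum sentences below are hypothetical): build a word co-occurrence graph and rank words by PageRank-style iteration.

```python
from collections import defaultdict

def extract_keywords(sentences, window=2, damping=0.85, iters=50, top_k=3):
    """Rank words by PageRank over a co-occurrence graph (TextRank-style sketch)."""
    # Build an undirected co-occurrence graph within a sliding window.
    neighbors = defaultdict(set)
    for words in sentences:
        for i, w in enumerate(words):
            for u in words[i + 1:i + window + 1]:
                if u != w:
                    neighbors[w].add(u)
                    neighbors[u].add(w)
    # Iterative PageRank-style scoring with uniform teleportation.
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        score = {w: (1 - damping) + damping *
                 sum(score[u] / len(neighbors[u]) for u in neighbors[w])
                 for w in neighbors}
    return sorted(score, key=score.get, reverse=True)[:top_k]

# Hypothetical tokenized forum snippets.
forum = [
    ["gradient", "descent", "converges", "slowly"],
    ["stochastic", "gradient", "descent", "update"],
    ["learning", "rate", "controls", "gradient", "step"],
]
print(extract_keywords(forum))
```

Because the scoring needs no labels, it matches the setting of learner-generated forum data where expert annotation is scarce.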
Findings
From the teaching resources, the methods used by the authors can accurately extract keywords with only 10 per cent labeled data. The authors find that resources of various forms, e.g. subtitles and PPTs, should be considered separately because models fit them with different ability. From the learning resources, the keywords extracted from MOOC forums are not as domain-specific as those extracted from teaching resources, but they can reflect the topics that are actively discussed in forums, from which instructors can get feedback. The authors implement two applications with the extracted keywords: generating concept maps and generating learning paths. The visual demos show that these have the potential to improve learning efficiency when integrated into a real MOOC platform.
Research limitations/implications
Conducting keyword extraction on MOOC resources is quite difficult because teaching resources are hard to obtain due to copyright. Also, obtaining labeled data is tough because expertise in the corresponding domain is usually required.
Practical implications
The experiment results support that MOOC resources are good enough for building keyword extraction models, and that an acceptable balance between human effort and model accuracy can be achieved.
Originality/value
This paper presents a pioneer study on keyword extraction on MOOC resources and obtains some new findings.
Prajowal Manandhar, Prashanth Reddy Marpu and Zeyar Aung
Abstract
We make use of Volunteered Geographic Information (VGI) data to extract the total extent of roads using remote sensing images. VGI data is often provided only as vector data represented by lines, not as the full extent, and high geolocation accuracy is not guaranteed: it is common to observe misalignment with the target road segments by several pixels on the images. In this work, we use the prior information provided by the VGI and extract the full road extent even when there is significant mis-registration between the VGI and the image. The method consists of image segmentation and the traversal of multiple agents along the available VGI information. First, we perform image segmentation, and then we traverse the fragmented road segments using autonomous agents to obtain a complete road map in a semi-automatic way once the seed points are defined. The road center-line in the VGI guides the process and allows us to discover and extract the full extent of the road network from the image data. The results demonstrate the validity and good performance of the proposed method for road extraction, which reflects the actual road width despite disturbances such as shadows, cars and trees, and show the efficiency of fusing VGI and satellite images.
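The segmentation-and-traversal idea can be given a minimal flavor as seed-guided region growing over a binary segmentation mask (a deliberate simplification of the paper's multi-agent traversal; the grid and seed below are hypothetical):

```python
from collections import deque

def grow_region(grid, seed):
    """Collect the connected set of road pixels (value 1) reachable from a seed."""
    rows, cols = len(grid), len(grid[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and (nr, nc) not in region):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# 1 = road pixel from segmentation; the seed would come from a (possibly
# misaligned) VGI center-line snapped onto the mask.
grid = [
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
]
print(len(grow_region(grid, (0, 1))))  # → 6
```

The grown region recovers the full road width from the image even when the VGI line itself is offset, which is the role the VGI prior plays in the paper.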
Mohd Hanafi Azman Ong, Norazlina Mohd Yasin and Nur Syafikah Ibrahim
Abstract
Purpose
Measuring the internal response to online learning is seen as fundamental to the absorptive capacity that stimulates knowledge assimilation. However, in both practice and research, validated instruments that can effectively measure online learning response behavior are limited. Thus, in this study, a new instrument was designed based on the literature to determine the structural variables that exist in online learning response behavior.
Design/methodology/approach
A structured survey was designed and distributed to 410 Malaysian students enrolled in higher-education institutions. The questionnaire has 38 items, all of which were scored using a seven-point Likert scale. First, exploratory factor analysis with three types of extraction methods (i.e. principal component, principal axis factoring and maximum likelihood) was used to compare the grouping variables produced by each extraction method, consistently using a varimax rotation method. In the second phase, reliability analysis was performed to determine the reliability level of the grouping variables, and finally, correlation analysis was performed to determine the discriminant and nomological validity of the grouping variables.
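The reliability phase can be sketched as a Cronbach's alpha computation, the standard internal-consistency coefficient for Likert scales (the item scores below are hypothetical, and the paper does not state which software was used):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for item-score columns, using population variances."""
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three 7-point Likert items answered by five hypothetical respondents.
items = [
    [7, 6, 5, 3, 2],
    [6, 6, 4, 3, 1],
    [7, 5, 5, 2, 2],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # → 0.977
```

Values above roughly 0.7 are conventionally read as adequate reliability for a grouping variable.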
Findings
The findings revealed that nine grouping variables were retrieved, with all items having good factor loadings and communalities, as well as an adequate degree of reliability. These extracted variables have good discriminant and nomological validity, as evidenced by correlation analysis, which confirmed that the directions of the relationships among the extracted dimensions follow the expected theory (i.e. positive direction) and that the correlation coefficients are less than 0.70.
Research limitations/implications
This study proposes a comprehensive set of questionnaires that measure students' online learning response behavior. These questionnaires have been developed on the basis of an extensive literature review and have undergone a rigorous process of validity and reliability testing for measuring students' online learning response behavior.
Originality/value
This study's findings will aid academic practitioners in assessing the online learning response behavior of students, as well as enhancing the questionnaire's boost factor when administered in an online learning environment.
Linzi Wang, Qiudan Li, Jingjun David Xu and Minjie Yuan
Abstract
Purpose
Mining user-concerned, actionable and interpretable hot topics will help management departments fully grasp the latest events and make timely decisions. Existing topic models primarily integrate word embedding and matrix decomposition, which only generates keyword-based hot topics with weak interpretability, making it difficult to meet the specific needs of users. Mining phrase-based hot topics with syntactic dependency structure has been proven to model structural information effectively. A key challenge lies in effectively integrating the above information into the hot topic mining process.
Design/methodology/approach
This paper proposes the nonnegative matrix factorization (NMF)-based hot topic mining method, semantics syntax-assisted hot topic model (SSAHM), which combines semantic association and syntactic dependency structure. First, a semantic–syntactic component association matrix is constructed. Then, the matrix is used as a constraint condition to be incorporated into the block coordinate descent (BCD)-based matrix decomposition process. Finally, a hot topic information-driven phrase extraction algorithm is applied to describe hot topics.
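The NMF core can be sketched with plain multiplicative updates rather than the paper's BCD solver with the semantic-syntactic constraint matrix (a simplification; the toy term-document matrix is hypothetical):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=300, eps=1e-9):
    """Unconstrained NMF via multiplicative updates (Lee-Seung style sketch)."""
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(rank)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

# Toy 4x4 term-document matrix with two clearly separated "topics".
V = [[3, 2, 0, 0], [6, 4, 0, 0], [0, 0, 2, 3], [0, 0, 4, 6]]
W, H = nmf(V, rank=2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(4))
print(err)  # small reconstruction error
```

In SSAHM, the association matrix enters as an extra constraint in the factorization objective, which the BCD solver handles per block; the sketch above shows only the unconstrained baseline it builds on.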
Findings
The efficacy of the developed model is demonstrated on two real-world datasets, and the effects of dependency structure information on different topics are compared. The qualitative examples further explain the application of the method in real scenarios.
Originality/value
Most prior research focuses on keyword-based hot topics. Thus, the literature is advanced by mining phrase-based hot topics with syntactic dependency structure, which can effectively analyze the semantics. The development of a syntactic dependency structure considering the combination of word order and part-of-speech (POS) is a step forward, as word order and POS are only utilized separately in the prior literature. Ignoring this synergy may miss important information, such as grammatical structure coherence and logical relations between syntactic components.
Loris Nanni and Sheryl Brahnam
Abstract
Purpose
Automatic DNA-binding protein (DNA-BP) classification is now an essential proteomic technology. Unfortunately, many systems reported in the literature are tested on only one or two datasets/tasks. The purpose of this study is to create the most optimal and universal system for DNA-BP classification, one that performs competitively across several DNA-BP classification tasks.
Design/methodology/approach
Efficient DNA-BP classifier systems require the discovery of powerful protein representations and feature extraction methods. Experiments were performed that combined and compared descriptors extracted from state-of-the-art matrix/image protein representations. These descriptors were trained on separate support vector machines (SVMs) and evaluated. Convolutional neural networks with different parameter settings were fine-tuned on two matrix representations of proteins. Decisions were fused with the SVMs using the weighted sum rule and evaluated to experimentally derive the most powerful general-purpose DNA-BP classifier system.
Findings
The best ensemble proposed here produced comparable, if not superior, classification results on a broad and fair comparison with the literature across four different datasets representing a variety of DNA-BP classification tasks, thereby demonstrating both the power and generalizability of the proposed system.
Originality/value
Most DNA-BP methods proposed in the literature are only validated on one (rarely two) datasets/tasks. In this work, the authors report the performance of their general-purpose DNA-BP system on four datasets representing different DNA-BP classification tasks. The excellent results of the proposed best classifier system demonstrate the power of the proposed approach. These results can now be used for baseline comparisons by other researchers in the field.
Julián Monsalve-Pulido, Jose Aguilar, Edwin Montoya and Camilo Salazar
Abstract
This article proposes an architecture of an intelligent and autonomous recommendation system to be applied to any virtual learning environment, with the objective of efficiently recommending digital resources. The paper presents the architectural details of the intelligent and autonomous dimensions of the recommendation system. The paper describes a hybrid recommendation model that orchestrates and manages the available information and the specific recommendation needs, in order to determine the recommendation algorithms to be used. The hybrid model allows the integration of the approaches based on collaborative filter, content or knowledge. In the architecture, information is extracted from four sources: the context, the students, the course and the digital resources, identifying variables, such as individual learning styles, socioeconomic information, connection characteristics, location, etc. Tests were carried out for the creation of an academic course, in order to analyse the intelligent and autonomous capabilities of the architecture.
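One minimal way a hybrid model can combine the three approaches is a linear blend of their scores (a sketch only; the weights, scores and resource names are hypothetical, and the paper's orchestration logic is richer than a fixed blend):

```python
def hybrid_score(content_sim, collab_score, knowledge_score,
                 weights=(0.4, 0.4, 0.2)):
    """Blend content-based, collaborative and knowledge-based scores linearly."""
    w1, w2, w3 = weights
    return w1 * content_sim + w2 * collab_score + w3 * knowledge_score

# Hypothetical per-resource scores for one learner.
resources = {
    "video_intro": hybrid_score(0.9, 0.6, 0.8),
    "quiz_basics": hybrid_score(0.5, 0.9, 0.4),
    "paper_deep":  hybrid_score(0.7, 0.2, 0.9),
}
best = max(resources, key=resources.get)
print(best)
```

An autonomous recommender would additionally adapt the weights per learner from context variables such as learning style or connection characteristics, as the architecture describes.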
Shilei Wang, Zhan Peng, Guixian Liu, Weile Qiang and Chi Zhang
Abstract
Purpose
In this paper, a high-frequency radar test system was used to collect data on clean and fouled ballast beds of ballasted tracks for a quantitative evaluation of the condition of railway ballast beds.
Design/methodology/approach
Based on the original radar signals, the time-frequency characteristics of the radar signals were analyzed and five ballast bed condition characteristic indexes were proposed: the frequency-domain integral area, the scanning area, the number of intersections with the time axis, the number of time-domain inflection points and the amplitude envelope obtained by the Hilbert transform. The effectiveness and sensitivity of the indexes were then analyzed.
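The amplitude-envelope index can be sketched via the analytic signal, computed here with a naive DFT-based Hilbert transform (a textbook method chosen for illustration; the radar trace below is synthetic, not measured data):

```python
import cmath
import math

def hilbert_envelope(x):
    """Amplitude envelope via the analytic signal (naive O(n^2) DFT)."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # Zero negative frequencies, double positive ones (keep DC and Nyquist).
    for k in range(n):
        if 0 < k < n // 2:
            X[k] *= 2
        elif k > n // 2:
            X[k] = 0
    xa = [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
          for t in range(n)]
    return [abs(v) for v in xa]

# A pure carrier of unit amplitude should have a flat envelope of 1.
n = 64
trace = [math.cos(2 * math.pi * 8 * t / n) for t in range(n)]
env = hilbert_envelope(trace)
print(max(abs(e - 1.0) for e in env))  # close to 0
```

The per-sample DFT is why this index is the costly one in the paper's timings; the other four indexes need only a single pass over the trace.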
Findings
The thickness of ballast bed tested at the sleeper bottom by high-frequency radar is up to 55 cm, which meets the requirements of ballast bed detection. Compared with a clean ballast bed, the values of the five indexes for a fouled ballast bed are larger, and the five indexes can effectively show the condition of the ballast bed. The computation time of the amplitude envelope obtained by the Hilbert transform is 140 s·km−1, while that of the other indexes is 5 s·km−1. The amplitude envelope obtained by the Hilbert transform is the most sensitive in subgrade and tunnel sections, followed by the scanning area; the number of intersections with the time axis is the most sensitive in bridge sections, again followed by the scanning area. The scanning area can adapt to different substructures such as subgrades, bridges and tunnels, with high overall sensitivity.
Originality/value
The research provides appropriate characteristic indexes, derived from the original high-frequency radar signal, to quantitatively evaluate ballast bed condition under different substructures.
Joseph F. Hair, Marcelo L.D.S. Gabriel, Dirceu da Silva and Sergio Braga Junior
Abstract
Purpose
This paper aims to present the fundamental aspects of the development and validation (D&V) of attitude measurement scales, as well as practical aspects that are not deeply explored in books and manuals. These aspects are the result of the authors' long experience and of arduous learning from errors and mistakes.
Design/methodology/approach
This paper is methodological in nature and can be very useful as an initial reading on its theme. It presents four D&V stages: literature review or interviews with experts; theoretical or face validation; semantic validation or validation with potential respondents; and statistical validation.
Findings
This is a methodological paper, and its main finding is its usefulness to researchers.
Research limitations/implications
The main implication of this paper is to support researchers in the process of D&V of measurement scales.
Practical implications
This paper serves as a step-by-step guide for researchers on the D&V of measurement scales.
Social implications
It supports researchers in their data collection and analysis.
Originality/value
This is a practical guide, with tips from seasoned scholars to help researchers on the D&V of measurement scales.