Search results
1 – 10 of over 6,000

Subhamoy Dhua, Kshitiz Kumar, Vijay Singh Sharanagat and Prabhat K. Nema
Abstract
Purpose
The amount of food wasted every year is 1.3 billion metric tonnes (MT), of which 0.5 billion MT is contributed by the fruit processing industries. The waste includes by-products such as peels, pomace and seeds and is a good source of bioactive compounds such as phenolic compounds, flavonoids, pectin, lipids and dietary fibres. Hence, the purpose of the present study is to review the novel techniques used for the extraction of bioactive compounds from food waste, to aid the selection of a suitable extraction method.
Design/methodology/approach
Novel extraction techniques such as ultrasound-assisted extraction, microwave-assisted extraction, enzyme-assisted extraction, supercritical fluid extraction, pulsed electric field extraction and pressurized liquid extraction have emerged to overcome the drawbacks and constraints of conventional extraction techniques. Hence, this study is focussed on novel extraction techniques, their limitations and optimization for the extraction of bioactive compounds from fruit and vegetable waste.
Findings
This study presents a comprehensive review of the novel extraction processes that have been adopted for the extraction of bioactive compounds from food waste. This paper also summarizes the optimum extraction conditions for bioactive compounds from various food wastes using novel extraction techniques.
Research limitations/implications
Food waste is rich in bioactive compounds, and their efficient extraction may add value to the food processing industries. Hence, a comprehensive analysis is needed to overcome the problems associated with extraction and with the selection of suitable extraction techniques.
Social implications
Selection of a suitable extraction method will not only add value to food waste but also reduce waste dumping and the cost of bioactive compounds.
Originality/value
This paper presents the research progress on the extraction of bioactive compounds from food waste using novel extraction techniques.
Saheed Adewale Omoniyi, Michael Ayodele Idowu, Abiodun Aderoju Adeola and Adekunle Ayodeji Folorunso
Abstract
Purpose
This paper aims to review the chemical composition and industrial benefits of oil extracted from dikanut kernels.
Design/methodology/approach
Several published works on the chemical composition of dikanut kernels, methods of oil extraction from dikanut kernels and the chemical composition of the extracted oil were critically reviewed.
Findings
The review showed that the proximate composition of dikanut kernels ranged from 2.10 to 11.90 per cent, 7.70 to 9.24 per cent, 51.32 to 70.80 per cent, 0.86 to 10.23 per cent, 2.26 to 6.80 per cent and 10.72 to 26.02 per cent for moisture, crude protein, crude fat, crude fibre, ash and carbohydrate contents, respectively. The methods of oil extraction from dikanut kernels include the soxhlet method, novel methods, the enzymatic method and the pressing method. The quality attributes of dikanut kernel oil ranged from 1.59 to 4.70 g/100 g, 0.50 to 2.67 meq/kg, 4.30 to 13.40 g/100 g, 187.90 to 256.50 mg KOH/g and 3.18 to 12.94 mg KOH/g for free fatty acid, peroxide value, iodine value, saponification value and acid value, respectively. Also, the percentage compositions of oleic, myristic, stearic, linolenic, palmitic and lauric acids and of saturated, monounsaturated and polyunsaturated fatty acids ranged from 0.00 to 6.90, 20.50 to 61.68, 0.80 to 11.40, 0.27 to 6.40, 5.06 to 10.30, 27.63 to 40.70, 97.45 to 98.73, 1.82 to 2.12 and 0.27 to 0.49, respectively. The results showed that dikanut kernels have appreciable amounts of protein and carbohydrate and a high fat content, while oil extracted from dikanut kernels has a high saponification value and high myristic and lauric acid contents.
Research limitations/implications
There is scant published information on industrial products made from oil extracted from dikanut kernels.
Practical implications
The review helps in identifying methods of extracting oil from dikanut kernels other than the popular soxhlet method, which uses organic solvents. It also helps to identify the domestic and industrial benefits of oil extracted from dikanut kernels.
Originality/value
The review showed that oil extracted from dikanut kernels could be useful as a food additive, a flavour ingredient and a coating for fresh citrus fruits, and in the manufacture of margarine, oil creams, cooking oil, defoaming agents, cosmetics and pharmaceutical products.
Suyong Yeon, ChangHyun Jun, Hyunga Choi, Jaehyeon Kang, Youngmok Yun and Nakju Lett Doh
Abstract
Purpose
The authors aim to propose a novel plane extraction algorithm for geometric 3D indoor mapping with range scan data.
Design/methodology/approach
The proposed method uses a divide-and-conquer step to handle huge amounts of point-cloud data efficiently, not as a whole, but as separate sub-groups with similar plane parameters. The method adopts robust principal component analysis to enhance estimation accuracy.
Findings
Experimental results verify that the method not only shows enhanced performance in plane extraction, but also broadens the domain of interest of plane registration to information-poor environments (such as simple indoor corridors), whereas the previous method works adequately only in information-rich environments (such as spaces with many features).
Originality/value
The proposed algorithm has three advantages over the current state-of-the-art method: it is fast, it utilizes more inlier sensor data without being contaminated by severe sensor noise, and it extracts more accurate plane parameters.
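The authors' robust PCA variant is not reproduced here; as a minimal sketch of the underlying idea, ordinary PCA can estimate the plane parameters (unit normal and offset) of a point-cloud sub-group, since the normal is the direction of least variance. All names and values below are illustrative.

```python
import numpy as np

def fit_plane_pca(points):
    """Estimate a plane n . x = d from an (N, 3) array of points.

    The plane normal is the eigenvector of the covariance matrix
    with the smallest eigenvalue (direction of least variance).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)   # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    normal = eigvecs[:, 0]                      # smallest-variance direction
    d = normal @ centroid                       # plane offset
    return normal, d

# Synthetic sub-group: points on the plane z = 2 with small noise
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = np.full(200, 2.0) + rng.normal(0, 1e-3, 200)
pts = np.column_stack([xy, z])
n, d = fit_plane_pca(pts)   # n close to (0, 0, +/-1), |d| close to 2
```

The robust variant the paper adopts would replace the plain covariance estimate with one that down-weights noisy outlier points.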
Marie Tirvaudey, Robin Bouclier, Jean-Charles Passieux and Ludovic Chamoin
Abstract
Purpose
The purpose of this paper is to further simplify the use of NURBS in industrial environments. Although isogeometric analysis (IGA) has been the object of intensive study over the past decade, its massive deployment in industrial analysis still appears quite marginal. This is partly due to its implementation, which is not straightforward with respect to the elementary structure of finite element (FE) codes, and this often discourages industrial engineers from adopting isogeometric capabilities in their well-established simulation environments.
Design/methodology/approach
Based on the concept of Bézier and Lagrange extraction, a novel method is proposed to implement IGA from an existing industrial FE code with the aim of bringing the human implementation effort to the minimum possible level (only standard input/output of finite element analysis (FEA) codes is used, avoiding the implementation of code-dependent subroutines). An approximate global link from Lagrange polynomials to non-uniform rational B-spline (NURBS) functions is formulated, which enables the whole set of FE routines to remain untouched during the implementation.
Findings
As a result, only the linear system resolution step is bypassed: the resolution is performed in an external script after projecting the FE system onto the reduced, more regular and isogeometric basis. The novel procedure is successfully validated through different numerical experiments involving linear and nonlinear isogeometric analyses using the standard input/output of the industrial FE software Code_Aster.
Originality/value
A non-invasive implementation of IGA into FEA software is proposed. The whole FE routines are untouched during the novel implementation procedure; a focus is made on the IGA solution of nonlinear problems from existing FEA software; technical details on the approach are provided by means of illustrative examples and step-by-step implementation; the methodology is evaluated on a range of two- and three-dimensional elasticity and elastoplasticity benchmarks solved using the commercial software Code_Aster.
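The paper's actual extraction operator and Code_Aster workflow are not reconstructed here; the sketch below only illustrates the algebra of the bypassed step: projecting an assembled FE system onto a reduced, more regular basis and solving externally. The 1D Poisson problem and the one-function reduced basis are hypothetical choices for which the projection happens to be exact.

```python
import numpy as np

def solve_projected(K, f, C):
    """Solve K u = f after projecting onto a reduced basis.

    C maps reduced (e.g. spline) DOFs to FE DOFs: u_fe ~ C @ u_red.
    The reduced system C^T K C u_red = C^T f is solved externally,
    then the solution is mapped back to the FE mesh.
    """
    K_red = C.T @ K @ C
    f_red = C.T @ f
    u_red = np.linalg.solve(K_red, f_red)
    return C @ u_red

# Toy problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with n linear finite elements.
n = 32
h = 1.0 / n
x = np.linspace(h, 1 - h, n - 1)                 # interior nodes
K = (np.diag(np.full(n - 1, 2.0))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h          # assembled FE stiffness
f = np.full(n - 1, h)                            # assembled FE load vector

# Hypothetical one-function reduced basis x(1 - x), sampled at the nodes;
# the exact solution x(1 - x)/2 lies in its span, so the projected solve
# reproduces the full FE answer.
C = (x * (1 - x)).reshape(-1, 1)
u = solve_projected(K, f, C)
```

Only K, f and u cross the code boundary, which is why the FE routines themselves need no modification.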
Abstract
Purpose
For many pattern recognition problems, the relation between the sample vectors and the class labels is known during the data acquisition procedure. However, finding the useful rules or knowledge hidden in the data is very important and challenging. Rule extraction methods are very useful in mining the important and heuristic knowledge hidden in the original high-dimensional data. They can help us to construct predictive models with fewer attributes of the data, so as to provide valuable model interpretability and shorter training times.
Design/methodology/approach
In this paper, a novel rule extraction method based on a biclustering algorithm is proposed.
Findings
To choose the most significant biclusters from the huge number detected, a specially modified information entropy calculation method is also provided. It is shown that, in practice, all of the important knowledge is hidden in these biclusters.
Originality/value
The novelty of the new method lies in the fact that the detected biclusters can be conveniently translated into if-then rules. It provides an intuitively explainable and comprehensive approach to extracting rules from high-dimensional data while keeping high classification accuracy.
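As an illustrative sketch (not the paper's algorithm), a bicluster, i.e. a subset of samples and a subset of attributes, can be turned into an if-then rule by taking the attribute ranges inside the bicluster as conditions and the majority class of its samples as the conclusion. The tiny dataset below is invented.

```python
from collections import Counter

def bicluster_to_rule(data, labels, rows, cols):
    """Translate a bicluster (sample indices x attribute indices)
    into an if-then rule: attribute ranges -> majority class label."""
    conditions = {
        c: (min(data[r][c] for r in rows), max(data[r][c] for r in rows))
        for c in cols
    }
    label = Counter(labels[r] for r in rows).most_common(1)[0][0]
    return conditions, label

def rule_matches(sample, conditions):
    """True if every conditioned attribute falls within its range."""
    return all(lo <= sample[c] <= hi for c, (lo, hi) in conditions.items())

# Hypothetical dataset: two attributes, two classes
data = [[1.0, 5.0], [1.2, 5.1], [3.0, 9.0], [3.1, 8.8]]
labels = ["A", "A", "B", "B"]
conditions, label = bicluster_to_rule(data, labels, rows=[0, 1], cols=[0, 1])
# Rule reads: if attr0 in [1.0, 1.2] and attr1 in [5.0, 5.1] then class "A"
```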
Jincan Zhang, Min Liu, Jinchan Wang and Kun Xu
Abstract
Purpose
High-speed indium phosphide (InP) heterojunction bipolar transistors (HBTs) have been widely used to design high-speed analog, digital and mixed-signal integrated circuits. The purpose of this study is to propose a new parameter extraction procedure for determining an improved T-topology small-signal equivalent circuit of InP HBTs.
Design/methodology/approach
The alternating current crowding effect is considered through adding the intrinsic base capacitance in the small-signal equivalent circuit. All of the circuit parameters are extracted directly without using any approximation.
Findings
The extraction technique is easier to understand and clearer than other extraction methods: the equations are derived from the S-parameters by peeling peripheral elements from the small-signal model to obtain reduced models, and each equivalent-circuit parameter is then extracted from its own equation.
Originality/value
To validate the presented parameter extraction technique, an n-p-n emitter-up InP HBT was analyzed using the method. Excellent agreement between measured and modeled S-parameters is obtained up to 40 GHz.
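The paper's specific equivalent circuit is not reproduced here; the sketch below shows only the generic "peeling" algebra such direct-extraction procedures rely on: transform measured S-parameters to admittance form, subtract a known parallel element, and continue with the reduced network. The device values and the 20 fF pad capacitance are hypothetical.

```python
import numpy as np

def s_to_y(S, z0=50.0):
    """Convert a 2x2 S-parameter matrix to Y-parameters (equal z0 at both ports)."""
    I = np.eye(2, dtype=complex)
    return (1.0 / z0) * np.linalg.inv(I + S) @ (I - S)

def y_to_s(Y, z0=50.0):
    """Convert a 2x2 Y-parameter matrix to S-parameters."""
    I = np.eye(2, dtype=complex)
    return (I - z0 * Y) @ np.linalg.inv(I + z0 * Y)

def peel_parallel(S_meas, Y_parasitic, z0=50.0):
    """Remove a known parallel (shunt) element from measured S-parameters."""
    return s_to_y(S_meas, z0) - Y_parasitic

# Hypothetical intrinsic device plus a shunt pad capacitance at port 1
omega = 2 * np.pi * 10e9                                   # 10 GHz
Y_dev = np.array([[0.02, -0.01], [-0.01, 0.015]], dtype=complex)
Y_pad = np.array([[1j * omega * 20e-15, 0], [0, 0]])       # 20 fF pad
S_meas = y_to_s(Y_dev + Y_pad)                             # "measured" data
Y_intrinsic = peel_parallel(S_meas, Y_pad)                 # recovers Y_dev
```

Series elements are peeled analogously in impedance (Z) form; alternating the two steps strips the model layer by layer, as the abstract describes.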
Abstract
Purpose
This paper aims to focus on data analytic tools and integrated data-analyzing approaches used on smart energy meters (SEMs). Furthermore, while observing the diverse techniques and frameworks for data analysis of SEMs, the authors propose a novel framework for SEMs using a gamification approach to enhance the involvement of consumers in conserving energy and improving efficiency.
Design/methodology/approach
A few research strategies have been reported for analyzing the raw data, yet considerable work remains to be done to make these commercially viable. The advantages of SEMs are additionally discussed for inspiring consumers, utilities and their respective partners.
Findings
Consumers, utilities and researchers can benefit from the recommended framework by planning their routine activities and enjoying the rewards offered by the gamification approach. Through gamification, consumer engagement is enhanced and less sustainable conduct is changed on a voluntary basis. Practical implementations of such approaches have shown improved energy efficiency as a consequence.
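The framework itself is described qualitatively; as a purely hypothetical illustration of the kind of gamification rule it could contain, points might be awarded in proportion to energy saved against a consumer's own baseline:

```python
def award_points(baseline_kwh, actual_kwh, points_per_kwh=10):
    """Toy gamification rule (invented, not from the paper): award points
    proportional to metered energy saved relative to a personal baseline;
    no points are deducted when consumption exceeds the baseline."""
    saved = max(0.0, baseline_kwh - actual_kwh)
    return round(saved * points_per_kwh)

# A consumer who uses 270 kWh against a 300 kWh baseline earns points;
# one who exceeds the baseline simply earns none.
monthly = award_points(300, 270)
over = award_points(300, 310)
```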
Chantola Kit, Toshiyuki Amagasa and Hiroyuki Kitagawa
Abstract
Purpose
The purpose of this paper is to propose efficient algorithms for structural grouping over Extensible Markup Language (XML) data, called TOPOLOGICAL ROLLUP (T‐ROLLUP), which compute aggregation functions over XML data with multiple hierarchical levels. They play an important role in the online analytical processing of XML data, called XML‐OLAP, with which complex analysis over XML can be performed to discover valuable information.
Design/methodology/approach
Several variations of algorithms are proposed for efficient T‐ROLLUP computation. First, two basic algorithms, the top‐down algorithm (TDA) and the bottom‐up algorithm (BUA), are presented, in which well‐known structural‐join algorithms are used. The paper then proposes more efficient algorithms, called single‐scan by preorder number and single‐scan by postorder number (SSC‐Pre/Post), which are also based on structural joins but are modified from the basic algorithms so that multiple levels of grouping are computed in a single scan over the node lists. In addition, the paper adapts the algorithms for parallel execution in multi‐core environments.
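As a sketch of the single-scan idea (using region (start, end) node labels, a common equivalent of the pre/post-order numbering; this is not the authors' exact algorithm), one pass in document order with a stack of open ancestors suffices to roll leaf values up to every grouping node:

```python
def t_rollup(nodes):
    """Single-scan roll-up over a tree whose nodes carry region labels.

    nodes: list of (start, end, value); a node is an ancestor of another
    iff its (start, end) interval strictly contains the other's.
    Returns {(start, end): value aggregated over the node's subtree},
    computed in one pass over the nodes sorted by start position.
    """
    totals = {}
    stack = []  # region labels of the current node's open ancestors
    for start, end, value in sorted(nodes):
        while stack and stack[-1][1] < start:   # pop finished subtrees
            stack.pop()
        totals[(start, end)] = value
        for anc in stack:                       # credit every open ancestor
            totals[anc] += value
        stack.append((start, end))
    return totals

# Hypothetical tree: root contains group A (two leaves) and leaf B
nodes = [(0, 9, 0),   # root
         (1, 6, 0),   # group A
         (2, 3, 5),   # leaf a1
         (4, 5, 7),   # leaf a2
         (7, 8, 3)]   # leaf B
totals = t_rollup(nodes)   # root aggregates 15, A aggregates 12
```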
Findings
Several experiments were conducted with XMark and synthetic XML data to show the effectiveness of the proposed algorithms. The experiments show that the proposed algorithms perform much better than the naïve implementation. In particular, the proposed SSC‐Pre and SSC‐Post perform better than TDA and BUA in all cases. Beyond that, the experiment using the parallel single-scan algorithm also shows better performance than the ordinary basic algorithm.
Research limitations/implications
This paper focuses on the T‐ROLLUP operation for XML data analysis; other operations related to XML‐OLAP, such as CUBE, WINDOWING and RANKING, should also be investigated.
Originality/value
The paper presents an extended version of one of the award-winning papers at iiWAS2008.
Jun Deng, Chuyi Zhong, Shaodan Sun and Ruan Wang
Abstract
Purpose
This paper aims to construct a spatio-temporal emotional framework (STEF) for digital humanities from a quantitative perspective, applying knowledge extraction and mining technology to promote innovation in humanities research paradigms and methods.
Design/methodology/approach
The proposed STEF uses methods of information extraction, sentiment analysis and geographic information systems to achieve knowledge extraction and mining. STEF integrates time, space and emotional elements to visualize the spatial and temporal evolution of emotions, which enriches the analytical paradigm in digital humanities.
Findings
The case study shows that STEF can effectively extract knowledge from unstructured texts in the field of Chinese Qing Dynasty novels. First, STEF introduces the knowledge extraction tools MARKUS and DocuSky to profile character entities and perform plot extraction. Second, STEF extracts the characters' emotional evolutionary trajectories from temporal and spatial perspectives. Finally, the study draws a spatio-temporal emotional path figure for the leading characters and integrates the corresponding plots to analyze the causes of emotional fluctuations.
Originality/value
The STEF is constructed based on "spatio-temporal narrative theory" and "emotional narrative theory". It is the first framework to integrate elements of time, space and emotion to analyze the emotional evolution trajectories of characters in novels. The executability and operability of the framework are also verified with a case novel, suggesting a new path for the quantitative analysis of other novels.
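The MARKUS/DocuSky pipeline is not reproduced here; as a minimal sketch of the spatio-temporal aggregation step such a framework needs, sentence-level sentiment scores can be averaged per (chapter, place) cell to yield a character's emotional path. The events, places and scores below are invented.

```python
from collections import defaultdict

def emotional_path(events):
    """Aggregate sentence-level sentiment into a per-(chapter, place) path.

    events: iterable of (chapter, place, sentiment_score) for one character.
    Returns [(chapter, place, mean_score), ...] ordered by chapter,
    i.e. the character's spatio-temporal emotional trajectory.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for chapter, place, score in events:
        cell = sums[(chapter, place)]
        cell[0] += score
        cell[1] += 1
    return [(ch, pl, s / n) for (ch, pl), (s, n) in sorted(sums.items())]

# Hypothetical sentence-level scores for one character
events = [(1, "Beijing", 0.5), (1, "Beijing", 0.7), (2, "Nanjing", -0.4)]
path = emotional_path(events)
```

Plotting `path` over a map with chapter as the time axis gives the kind of spatio-temporal emotional path figure the abstract describes.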
Jochen Hartmann and Oded Netzer
Abstract
The increasing importance and proliferation of text data provide a unique opportunity and novel lens to study human communication across a myriad of business and marketing applications. For example, consumers compare and review products online, individuals interact with their voice assistants to search, shop, and express their needs, investors seek to extract signals from firms' press releases to improve their investment decisions, and firms analyze sales call transcripts to increase customer satisfaction and conversions. However, extracting meaningful information from unstructured text data is a nontrivial task. In this chapter, we review established natural language processing (NLP) methods for traditional tasks (e.g., LDA for topic modeling and lexicons for sentiment analysis and writing style extraction) and provide an outlook into the future of NLP in marketing, covering recent embedding-based approaches, pretrained language models, and transfer learning for novel tasks such as automated text generation and multi-modal representation learning. These emerging approaches allow the field to improve its ability to perform certain tasks that we have been using for more than a decade (e.g., text classification). But more importantly, they unlock entirely new types of tasks that bring about novel research opportunities (e.g., text summarization, and generative question answering). We conclude with a roadmap and research agenda for promising NLP applications in marketing and provide supplementary code examples to help interested scholars to explore opportunities related to NLP in marketing.
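As a minimal sketch of the lexicon-based sentiment scoring the chapter reviews (the word list here is an invented toy; real applications use validated dictionaries):

```python
def lexicon_sentiment(text, lexicon):
    """Score a text as the mean polarity of its lexicon-matched tokens."""
    tokens = text.lower().split()
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical miniature polarity lexicon
lexicon = {"great": 1, "love": 1, "excellent": 1,
           "poor": -1, "terrible": -1, "disappointing": -1}

# A mixed review nets out to a neutral score
score = lexicon_sentiment("great camera but terrible battery", lexicon)
```

Embedding-based and pretrained-language-model approaches, also surveyed in the chapter, replace this token-matching step with learned representations of the full text.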