Search results
1 – 10 of over 2000
Abstract
Purpose
The purpose of this paper is to find a practical method to minimize unnecessary rebroadcasts for ad hoc networks in a remote area.
Design/methodology/approach
The authors design a theoretical coverage-estimation model for autonomous broadcast pruning. To verify the model, simulations with the ns-2 network simulator evaluate both the effect of the confidence level and the actual performance.
Findings
The algorithm used to predict the coverage area of broadcasts can minimize unnecessary rebroadcasts.
Originality/value
An autonomous broadcast-pruning scheme based on local prediction of the remaining coverage area during an on-going broadcast yields beneficial results in terms of energy savings and preservation of limited bandwidth.
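The pruning decision the abstract describes can be illustrated with a simple geometric sketch: a node rebroadcasts only if the fraction of its radio disc not already covered by the sender's transmission exceeds a threshold. The two-disc model, the function names and the threshold value are illustrative assumptions, not the authors' actual estimation model:

```python
import math

def extra_coverage_fraction(d, R):
    """Fraction of a node's radio disc (radius R) NOT already covered by
    the sender's disc, when the two centres are a distance d apart."""
    if d >= 2 * R:
        return 1.0          # discs disjoint: everything is new coverage
    if d <= 0:
        return 0.0          # same position: nothing new to cover
    # Area of the lens where the two equal circles overlap.
    lens = 2 * R**2 * math.acos(d / (2 * R)) - (d / 2) * math.sqrt(4 * R**2 - d**2)
    return 1.0 - lens / (math.pi * R**2)

def should_rebroadcast(d, R, threshold=0.4):
    """Prune the rebroadcast when the predicted extra coverage is small."""
    return extra_coverage_fraction(d, R) >= threshold
```

A node far from the sender adds much new coverage and rebroadcasts; a node right next to it adds almost none and stays silent.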
Details
Keywords
Mostafa El Habib Daho, Nesma Settouti, Mohammed El Amine Bechar, Amina Boublenza and Mohammed Amine Chikh
Abstract
Purpose
Ensemble methods have been widely used in the field of pattern recognition due to the difficulty of finding a single classifier that performs well on a wide variety of problems. Despite the effectiveness of these techniques, studies have shown that ensemble methods generate a large number of hypotheses and that these ensembles in most cases contain redundant classifiers. Several works in the state of the art attempt to reduce the set of hypotheses without affecting performance.
Design/methodology/approach
In this work, the authors propose a pruning method that takes into consideration the correlation between each classifier and the classes, as well as between each classifier and the rest of the set. The authors use the random forest algorithm as the tree-based ensemble classifier, and the pruning is performed with a technique inspired by the CFS (correlation-based feature selection) algorithm.
Findings
The proposed method, CES (correlation-based ensemble selection), was evaluated on ten datasets from the UCI machine learning repository, and its performance was compared to that of six ensemble pruning techniques. The results showed that the proposed method selects a small ensemble in a shorter time while improving classification rates compared to the state-of-the-art methods.
Originality/value
CES is a new ordering-based method built on the CFS algorithm. In a short time, CES selects a small sub-ensemble that outperforms both the whole forest and the other state-of-the-art techniques used in this study.
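A greedy, CFS-inspired selection of this kind might look like the following sketch, in which subsets whose classifiers correlate strongly with the labels but weakly with each other receive a high merit score. The function names and the forward-selection stopping rule are assumptions; the actual CES ordering may differ:

```python
import numpy as np

def cfs_merit(preds, y, subset):
    """CFS-style merit of a subset of classifiers: high agreement with
    the labels, low redundancy among themselves (illustrative sketch)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(preds[i], y)[0, 1]) for i in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(preds[i], preds[j])[0, 1])
                    for a, i in enumerate(subset) for j in subset[a + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def select_ensemble(preds, y):
    """Greedy forward selection: keep adding the classifier that most
    improves the merit; stop when no addition helps."""
    remaining = list(range(len(preds)))
    subset, best = [], -np.inf
    while remaining:
        score, pick = max((cfs_merit(preds, y, subset + [i]), i)
                          for i in remaining)
        if score <= best:
            break
        best = score
        subset.append(pick)
        remaining.remove(pick)
    return subset
```

With a perfect classifier in the pool, the redundancy penalty keeps correlated near-duplicates out of the selected sub-ensemble.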
Details
Keywords
Vítor Tinoco, Manuel F. Silva, Filipe N. Santos, António Valente, Luís F. Rocha, Sandro A. Magalhães and Luis C. Santos
Abstract
Purpose
Robotics research in agriculture has been spurred by the increasing world population and the decreasing availability of agricultural labour. This paper aims to analyze the state of the art of pruning and harvesting manipulators used in agriculture.
Design/methodology/approach
A search was performed for papers matching specific keywords. Ten papers were selected based on a set of attributes that made them suitable for review.
Findings
The pruning manipulators were used in two different scenarios: grapevines and apple trees. These manipulators showed that a light-controlled environment could reduce visual errors and that prismatic joints on the manipulator are advantageous to obtain a higher reach. The harvesting manipulators were used for three types of fruits: strawberries, tomatoes and apples. These manipulators revealed that different kinematic configurations are required for different kinds of end-effectors, as some of these tools only require movement in the horizontal axis and others are required to reach the target with a broad range of orientations.
Originality/value
This work serves to reduce the gap in the literature regarding agricultural manipulators and will support new developments of novel solutions related to agricultural robotic grasping and manipulation.
Details
Keywords
James T. Luxhoj and Gene A. Giacomelli
Abstract
The development of labour standards for the single truss tomato production system is examined. Both time study and predetermined time systems, such as the Element Times for Agriculture (ETA) tables and the Maynard Operation Sequence Technique (MOST) tables, are used to determine labour standards for the operations of pruning and harvesting in a single truss tomato production system. The hypothesis is that a predetermined time system could be used to establish greenhouse labour standards, and thus replace the tedious and costly process of direct time study. Such a work measurement system would enable the setting of job standards quickly and accurately. Standardised work models will facilitate cost control of labour operations, and provide data for evaluation of labour costs within future greenhouse system designs. The data indicate that, although the pre‐determined time values varied from measured time study by around 6 per cent to over 23 per cent for pruning, the variation for harvesting ranged approximately from 3 per cent to 7 per cent. The combined results suggest that predetermined time systems can be used effectively to establish greenhouse labour standards for short cycle tasks without the loss of significant accuracy when using an absolute scale.
Details
Keywords
Zunhui Zhao, Haibin Shang, Pingyuan Cui and Xiangyu Huang
Abstract
Purpose
The purpose of this paper is to present a solution space searching method to study the initial design of interplanetary low thrust gravity assist trajectory.
Design/methodology/approach
To reduce the complexity and nonlinearity of the initial design problem, a sixth-degree inverse-polynomial shape-based approach is introduced. Improvements are then made to the solution of the shape-function parameters, yielding a quasi-Lambert solver: given the time of flight and the boundary states (positions and velocities) of the low-thrust phase, the thrust profile can be generated. Combined with a gravity-assist model, the problem is formulated, and an improved pruning technique is used to search the feasible solution space of the low-thrust gravity-assist trajectory.
Findings
Using the solution-space searching method, the feasible solution region is generated under the given mission conditions. The treatment of the gravity assist is shown to be more accurate than in previous methods. A further advantage is that the searching method can be used to design different types of mission trajectory, including flyby and rendezvous trajectories.
Practical implications
The method is an efficient approach to searching the feasible region of complex low-thrust gravity-assist trajectories, and it can provide appropriate initial guesses for such trajectories in the mission design phase.
Originality/value
The feasible solution space is obtained through the searching method. The quasi-Lambert solver is derived from the shape-based method and its improvements, and it proves useful during the searching process. The effectiveness of the method is demonstrated through mission trajectory design.
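In the shape-based literature, a sixth-degree inverse-polynomial shape is commonly written as r(θ) = 1/p(θ), where p is a sixth-degree polynomial in the transfer angle θ whose seven coefficients are fixed by the boundary states and the time of flight. A minimal evaluation sketch, assuming this generic form rather than the authors' exact parameterization:

```python
import numpy as np

def inverse_polynomial_radius(theta, coeffs):
    """Shape-based trajectory radius r(theta) = 1 / p(theta).

    `coeffs` holds the seven polynomial coefficients, lowest degree
    first; in a full design they would be solved from the boundary
    positions, velocities and time of flight rather than chosen freely.
    """
    # np.polyval expects the highest-degree coefficient first.
    return 1.0 / np.polyval(coeffs[::-1], theta)
```

Sampling r over the transfer angle then yields the candidate trajectory whose thrust profile the quasi-Lambert solver evaluates.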
Details
Keywords
Abstract
Purpose
This paper aims to identify the problem situations leading financial firms to kick off the elimination decision‐making process for financial products in their line, measure the importance of problem situations, and assess the effects of a set of contextual variables on the above importance.
Design/methodology/approach
The study took place in the UK; data were collected through 20 in‐depth interviews with managers of financial firms and a mail survey to a stratified random sample of financial firms, which yielded 112 returns.
Findings
Eight problem situations are identified and their importance is measured. The results indicate that the importance of problem situations is highly situation‐specific: it varies in relation to the degree of a financial firm's market orientation, the intensity of competition, the austerity of the regulatory environment, and the rhythm of technological change.
Research limitations/implications
From a theoretical standpoint, future research on the investigation of the importance of decision variables pertaining to line pruning must always take into consideration the internal and the external context of the firm. From a practical standpoint, this study has important policy implications, since it provides managers with a first picture of the effects of selected contextual forces on the importance of the problem situations triggering line pruning in services settings. The limitations of the study provide useful avenues for future investigation.
Originality/value
This study represents the first attempt to measure the importance of different problem situations triggering line pruning in financial services and relate that importance to a set of contextual variables. As such, it makes a clear theoretical contribution.
Details
Keywords
József Valyon and Gábor Horváth
Abstract
Purpose
The purpose of this paper is to present extended least squares support vector machines (LS‐SVM) where data selection methods are used to get sparse LS‐SVM solution, and to overview and compare the most important data selection approaches.
Design/methodology/approach
The selection methods are compared based on their theoretical background and using extensive simulations.
Findings
The paper shows that partial reduction is an efficient way of obtaining a reduced-complexity sparse LS‐SVM solution while still exploiting the full knowledge contained in the whole training data set. It also shows that the reduction technique based on the reduced row echelon form (RREF) of the kernel matrix is superior to the other data selection approaches.
Research limitations/implications
Data selection for getting a sparse LS‐SVM solution can be done in the different representations of the training data: in the input space, in the intermediate feature space, and in the kernel space. Selection in the kernel space can be obtained by finding an approximate basis of the kernel matrix.
Practical implications
The RREF‐based method is a data selection approach with a favorable property: there is a trade‐off tolerance parameter that can be used for balancing complexity and accuracy.
Originality/value
The paper gives contributions to the construction of high‐performance and moderate complexity LS‐SVMs.
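Selecting an approximate basis of the kernel matrix via its reduced row echelon form can be sketched as Gaussian elimination with a tolerance: columns whose pivot falls below the tolerance are treated as numerically dependent and pruned. This is an illustrative reading of the RREF idea, with `tol` standing in for the paper's trade-off parameter:

```python
import numpy as np

def rref_basis_columns(K, tol=1e-3):
    """Return indices of columns of K forming an approximate basis.

    Gaussian elimination with partial pivoting; a column whose best
    available pivot is below `tol` is (numerically) a combination of the
    columns already selected, so it is skipped -- i.e. pruned.
    """
    A = K.astype(float).copy()
    m, n = A.shape
    selected, row = [], 0
    for col in range(n):
        if row >= m:
            break
        pivot = row + np.argmax(np.abs(A[row:, col]))
        if np.abs(A[pivot, col]) < tol:
            continue                        # dependent column: prune it
        A[[row, pivot]] = A[[pivot, row]]   # move pivot row into place
        A[row] /= A[row, col]               # normalise the pivot row
        mask = np.arange(m) != row
        A[mask] -= np.outer(A[mask, col], A[row])  # eliminate the column
        selected.append(col)
        row += 1
    return selected
```

Raising `tol` selects fewer support vectors (lower complexity, lower accuracy); lowering it does the opposite, which is the balancing property the abstract highlights.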
Details
Keywords
Melanie Volkamer, Karen Renaud and Paul Gerber
Abstract
Purpose
Phishing is still a very popular and effective security threat, and it takes, on average, more than a day to detect a new phish website. Protection by purely technical means is hampered by this vulnerability window, during which users need to act to protect themselves. To support users in doing so, the paper proposes first making users aware of the need to consult the address bar, and further proposes pruning the URL displayed in the address bar. The authors report on an evaluation of this proposal.
Design/methodology/approach
The paper opted for an online study with 411 participants judging 16 websites, all with authentic design: half with legitimate and half with phish URLs. The authors applied four widely used types of URL manipulation techniques and conducted a within-subject and between-subject study, with participants randomly assigned to one of two groups (domain highlighting or pruning). Both proposals were then tested using a repeated-measures multivariate analysis of variance.
Findings
The analysis shows a significant improvement in terms of phish detection after providing the hint to check the address bar. Furthermore, the analysis shows a significant improvement in terms of phish detection after the hint to check the address bar for uninitiated participants in the pruning group, as compared to those in the highlighting group.
Research limitations/implications
Because of the chosen research approach, the research results may lack generalisability. Therefore, researchers are encouraged to test the proposed propositions further.
Practical implications
This paper confirms the efficacy of URL pruning and of prompting users to consult the address bar for phish detection.
Originality/value
This paper introduces a classification of the URL manipulation techniques used by phishers. It also provides evidence that drawing people's attention to the address bar makes them more likely to spot phish websites without impairing their ability to identify authentic websites.
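URL pruning as described above can be sketched as stripping everything except the host from the displayed URL. This naive version already hides the attacker-controlled path and query; a real implementation would additionally need a public-suffix list (e.g. the `tldextract` package) to isolate the registrable domain from deceptive subdomains:

```python
from urllib.parse import urlsplit

def pruned_display(url):
    """Prune a URL for display: keep only the host, dropping a leading
    'www.' along with the path and query string.

    Naive sketch -- it does NOT collapse phisher-chosen subdomains such
    as 'paypal.com.evil.example' down to the registrable domain.
    """
    host = urlsplit(url).hostname or ""
    return host[4:] if host.startswith("www.") else host
```

The second limitation is exactly why attention still has to go to the right-hand end of the host, which the paper's hint to consult the address bar addresses.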
Details
Keywords
Zhaokun Huang and Yufang Liang
Abstract
Purpose
Taking discipline construction in colleges and universities as the application background, and building on data mining and decision support system technology, the data generated by the university management information system are put to effective use. The paper aims to discuss these issues.
Design/methodology/approach
Using the Beijing Key Discipline Information Platform as the data source, the decision tree algorithm of data mining is studied. On the basis of the C4.5 decision tree, Bayesian theory is applied to the post-pruning of the tree.
Findings
A decision tree post-pruning algorithm based on Bayesian theory is proposed to simplify the decision tree, which improves the generalization ability of the whole algorithm. The algorithm is then used to build a prediction model of key disciplines. Combining the decision support system architecture, a data warehouse and the data mining algorithm built for university disciplines, and following the J2EE enterprise system specification, the MVC pattern is applied, and a prototype decision support system with a browser/server (B/S) structure for discipline construction in colleges and universities is completed and implemented.
Originality/value
The contribution lies in the Bayesian-theory-based post-pruning algorithm, which simplifies the decision tree and improves its generalization ability, and in the resulting B/S-structured prototype decision support system for discipline construction in colleges and universities.
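One common way to give a post-pruning rule a Bayesian flavour is to smooth the node error estimate with a prior, e.g. a Laplace correction, and collapse a subtree whenever the smoothed leaf error is no worse than the weighted subtree error. The sketch below uses that stand-in criterion and a hypothetical dict-based tree; the paper's actual Bayesian criterion may differ:

```python
def laplace_error(errors, n, classes=2):
    """Laplace-corrected (prior-smoothed) error estimate at a node."""
    return (errors + classes - 1) / (n + classes)

def subtree_error(node):
    """Sample-weighted error of a (possibly already pruned) subtree."""
    if node.get("leaf"):
        return laplace_error(node["errors"], node["n"])
    return (node["left"]["n"] * subtree_error(node["left"]) +
            node["right"]["n"] * subtree_error(node["right"])) / node["n"]

def prune(node):
    """Bottom-up post-pruning: replace a subtree with a leaf whenever
    the smoothed leaf error is no worse than the subtree error."""
    if node.get("leaf"):
        return node
    node["left"], node["right"] = prune(node["left"]), prune(node["right"])
    if laplace_error(node["errors"], node["n"]) <= subtree_error(node):
        return {"leaf": True, "n": node["n"], "errors": node["errors"]}
    return node
```

The smoothing is what improves generalization: on small nodes the prior dominates, so splits that only fit noise get collapsed.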
Details
Keywords
Yuze Shang, Fei Liu, Ping Qin, Zhizhong Guo and Zhe Li
Abstract
Purpose
The goal of this research is to develop a dynamic step path planning algorithm based on the rapidly exploring random tree (RRT) algorithm that combines Q-learning with the Gaussian distribution of obstacles. A route for autonomous vehicles may be swiftly created using this algorithm.
Design/methodology/approach
The authors divide the path planning problem into three key steps. First, tree expansion is accelerated by a dynamic step size that combines Q-learning with the Gaussian distribution of obstacles. Invalid nodes are then removed from the initially created paths using bidirectional pruning. Finally, B-splines are employed to smooth the resulting paths.
Findings
The algorithm is validated in simulations on straight and curved highways. The results show that the approach can produce a smooth, safe route that complies with the laws of vehicle motion.
Originality/value
An improved RRT algorithm based on Q-learning and the Gaussian distribution of obstacles (QGD-RRT) is proposed for the path planning of self-driving vehicles. Unlike previous methods, Q-learning is used to steer the tree's growth direction. The step size is then adapted dynamically to the density of the obstacle distribution, producing the initial path rapidly and further reducing planning time. Finally, to obtain a smooth and secure path that complies with the vehicle's kinematic and dynamic constraints, the path is optimized using an enhanced bidirectional pruning technique.
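The pruning step mentioned above can be illustrated by its forward pass: from each kept waypoint, jump straight to the farthest waypoint still reachable in a collision-free straight line (a bidirectional variant repeats this pass from the goal backwards). The circular-obstacle model and all function names are illustrative assumptions, not the paper's implementation:

```python
import math

def collision_free(p, q, obstacles):
    """True if the segment p-q misses every circular obstacle
    (cx, cy, radius) -- a hypothetical 2-D collision model."""
    for cx, cy, r in obstacles:
        px, py = p
        qx, qy = q
        dx, dy = qx - px, qy - py
        L2 = dx * dx + dy * dy
        # Parameter of the closest point on the segment to the centre.
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / L2))
        if math.hypot(px + t * dx - cx, py + t * dy - cy) <= r:
            return False
    return True

def prune_path(path, obstacles):
    """Forward pruning pass: from each kept waypoint, skip ahead to the
    farthest waypoint reachable in a straight line, dropping the
    intermediate (invalid) nodes."""
    pruned, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j], obstacles):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```

The shortened waypoint sequence is then what the B-spline smoothing stage described above would operate on.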
Details