Search results
1 – 10 of 252

Mostafa Aliabadi and Hamidreza Ghaffari
Abstract
Purpose
In this paper, community identification has been considered as the most critical task of social network analysis. The purpose of this paper is to organize the nodes of a given network graph into distinct clusters or known communities. These clusters will therefore form the different communities available within the social network graph.
Design/methodology/approach
To date, numerous methods have been developed to detect communities in social networks through graph clustering techniques. The k-means algorithm stands out as one of the most well-known graph clustering algorithms, celebrated for its straightforward implementation and rapid processing. However, it has a serious drawback: it is highly sensitive to initial conditions and often settles on local optima rather than finding the global optimum. More recently, clustering algorithms that use a reciprocal KNN (k-nearest neighbors) graph have been used for data clustering. This approach skillfully overcomes major shortcomings of the k-means algorithm, especially the selection of the initial cluster centers. However, it faces its own challenge: sensitivity to the choice of the neighborhood size parameter k, which governs how the nearest neighbors are selected during the clustering process. In this design, the Jaya optimization method is used to select the parameter k in the KNN method.
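As a rough illustration of the reciprocal-KNN idea this design builds on, the graph keeps an edge only when the neighbor relation holds in both directions (a minimal numpy sketch with assumed names; the paper's actual construction and the Jaya-driven choice of k are not reproduced here):

```python
import numpy as np

def mutual_knn_graph(X, k):
    """Build a mutual (reciprocal) k-nearest-neighbor graph.

    An edge (i, j) exists only when i is among j's k nearest
    neighbors AND j is among i's k nearest neighbors.
    """
    n = len(X)
    # Pairwise Euclidean distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-neighbors
    # Indices of each point's k nearest neighbors.
    knn = np.argsort(d, axis=1)[:, :k]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, knn[i]] = True
    return adj & adj.T                   # keep reciprocal edges only
```

Because edges must be reciprocal, weakly related points between clusters tend to stay disconnected, which is why the choice of k matters so much.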
Findings
Experiments on real-world network data show that the proposed approach significantly improves the accuracy of community detection in social networks. It also shows potential for discovering a more refined hierarchy in social networks, making it a useful tool for social network analysis.
Originality/value
This paper introduces an enhancement to the KNN graph-based clustering method by proposing a local average vector method for selecting the optimal neighborhood size parameter k. Furthermore, it presents an improved Jaya algorithm with KNN graph-based clustering for more effective community detection in social network graphs.
Luís Jacques de Sousa, João Poças Martins and Luís Sanhudo
Abstract
Purpose
Factors like bid price, submission time, and number of bidders influence the procurement process in public projects. These factors and the award criteria may impact the project’s financial compliance. Predicting budget compliance in construction projects has been traditionally challenging, but Machine Learning (ML) techniques have revolutionised estimations.
Design/methodology/approach
In this study, Portuguese Public Procurement Data (PPPData) was utilised as the model’s input. Notably, this dataset exhibited a substantial imbalance in the target feature. To address this issue, the study evaluated three distinct data balancing techniques: oversampling, undersampling, and the SMOTE method. Next, a comprehensive feature selection process was conducted, leading to the testing of five different algorithms for forecasting budget compliance. Finally, a secondary test was conducted, refining the features to include only those elements that procurement technicians can modify while also considering the two most accurate predictors identified in the previous test.
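The SMOTE balancing step can be sketched as follows (an illustrative numpy re-implementation of the basic idea, not the library call or exact configuration used in the study; the function name and defaults are assumptions):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority-class samples
    by interpolating a randomly chosen minority sample toward one of
    its k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    # Pairwise distances among minority samples.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self
    knn = np.argsort(d, axis=1)[:, :k]
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))         # pick a minority sample
        j = rng.choice(knn[i])               # pick one of its neighbors
        gap = rng.random()                   # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because every synthetic point lies on a segment between two real minority samples, the method enlarges the minority class without simply duplicating rows, which is what distinguishes it from plain oversampling.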
Findings
The findings indicate that employing the SMOTE method on the scraped data can achieve a balanced dataset. Furthermore, the results demonstrate that the Adam ANN algorithm outperformed others, boasting a precision rate of 68.1%.
Practical implications
The model can aid procurement technicians during the tendering phase by using historical data and analogous projects to predict performance.
Social implications
Although the study reveals that ML algorithms cannot accurately predict budget compliance using procurement data, they can still provide project owners with insights into the most suitable criteria, aiding decision-making. Further research should assess the model’s impact and capacity within the procurement workflow.
Originality/value
Previous research predominantly focused on forecasting budgets by leveraging data from the private construction execution phase. While some investigations incorporated procurement data, this study distinguishes itself by using an imbalanced dataset and anticipating compliance rather than predicting budgetary figures. The model predicts budget compliance by analysing qualitative and quantitative characteristics of public project contracts. The research paper explores various model architectures and data treatment techniques to develop a model to assist the Client in tender definition.
Christina Anderl and Guglielmo Maria Caporale
Abstract
Purpose
The article aims to establish whether the degree of aversion to inflation and the responsiveness to deviations from potential output have changed over time.
Design/methodology/approach
This paper assesses time variation in monetary policy rules by applying a time-varying parameter generalised methods of moments (TVP-GMM) framework.
Findings
Using monthly data until December 2022 for five inflation targeting countries (the UK, Canada, Australia, New Zealand, Sweden) and five countries with alternative monetary regimes (the US, Japan, Denmark, the Euro Area, Switzerland), we find that monetary policy has become more averse to inflation and more responsive to the output gap in both sets of countries over time. In particular, there has been a clear shift in inflation targeting countries towards a more hawkish stance on inflation since the adoption of this regime and a greater response to both inflation and the output gap in most countries after the global financial crisis, which indicates a stronger reliance on monetary rules to stabilise the economy in recent years. It also appears that inflation targeting countries pay greater attention to the exchange rate pass-through channel when setting interest rates. Finally, monetary surprises do not seem to be an important determinant of the evolution over time of the Taylor rule parameters, which suggests a high degree of monetary policy transparency in the countries under examination.
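The rule estimated in such time-varying-parameter frameworks is typically a Taylor-type rule; in its standard textbook form (the symbols here are the conventional ones, not necessarily the paper's notation):

```latex
i_t = r^{*} + \pi_t + \alpha_t \,(\pi_t - \pi^{*}) + \beta_t \,(y_t - \bar{y}_t)
```

where $i_t$ is the policy rate, $r^{*}$ the equilibrium real rate, $\pi^{*}$ the inflation target and $y_t - \bar{y}_t$ the output gap; the coefficients $\alpha_t$ (aversion to inflation) and $\beta_t$ (responsiveness to the output gap) are the parameters allowed to drift over time in the TVP-GMM estimation.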
Originality/value
It provides new evidence on changes over time in monetary policy rules.
Yimei Chen, Huanhuan Cheng and Baoquan Li
Abstract
Purpose
The purpose of this study is to propose a path-planning strategy based on the velocity-virtual spring method to realize collision-free tasks in dynamic environments and further improve the effect.
Design/methodology/approach
By considering factors such as the relative velocity and direction of dynamic obstacles, the repulsive force of the robot is improved, thereby enhancing the adaptability of the strategy and achieving flexible and effective avoidance against dynamic obstacles. The attraction formula has been designed to allow the robot to have better smooth changes and higher gradients near the target, helping robots better reach the target and follow formations. Moreover, to meet the demands of the various stages during the driving process, the null space behavioral control is used to solve multi-task conflict problems and strengthen formation coordination and control.
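A velocity-aware repulsion of the kind described can be sketched as follows (a minimal illustration with assumed gains and influence radius, not the paper's actual formula): a classic inverse-distance term is augmented when the obstacle is closing in on the robot.

```python
import numpy as np

def repulsive_force(p_robot, p_obs, v_robot, v_obs,
                    d0=2.0, k_rep=1.0, k_vel=0.5):
    """Illustrative velocity-aware repulsive force (a sketch):
    inverse-distance repulsion plus a term that grows with the
    closing speed between robot and obstacle."""
    rel_p = p_robot - p_obs
    d = np.linalg.norm(rel_p)
    if d >= d0:                              # outside the influence radius
        return np.zeros_like(p_robot)
    n = rel_p / d                            # unit vector away from obstacle
    f = k_rep * (1.0 / d - 1.0 / d0) * n     # static repulsion
    closing = -np.dot(v_robot - v_obs, n)    # > 0 when approaching
    if closing > 0:
        f += k_vel * closing * n             # push harder on approach
    return f
```

An obstacle moving toward the robot thus produces a stronger push than a receding one at the same distance, which is the adaptability the strategy aims for.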
Findings
Comparing the planned paths and formation effects through simulation and physical experiments, the results of this study show that the proposed algorithm can successfully maintain formation stability and plan smooth, safe paths in both static and dynamic environments.
Originality/value
This paper proposes a path-planning strategy based on the velocity-virtual spring method to plan collision-free paths for formation in dynamic environments.
Andong Liu, Yawen Zhang, Jiayun Fu, Yuankun Yan and Wen-An Zhang
Abstract
Purpose
Traditional algorithms in manipulator path planning often fall into local minima or fail to find feasible solutions. The purpose of this paper is to propose a 3D artificial moment method (3D-AMM) for obstacle avoidance of the robotic arm's end-effector.
Design/methodology/approach
First, a new method for constructing temporary attractive points in 3D is introduced using the vector triple product approach, generating attractive moments that draw the end-effector toward them. Second, distance weight factorization and spatial projection methods are introduced to improve the solution of repulsive moments in multiobstacle scenarios. Third, a novel motion vector-solving mechanism is proposed to provide nonzero velocity for the end-effector, solving the problem of the motion vector being confined to a fixed coordinate plane by dimensionality constraints.
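The vector triple product the construction relies on satisfies the BAC-CAB identity, which is easy to verify numerically (this illustrates the identity only; the paper's exact placement of temporary attractive points is not reproduced here):

```python
import numpy as np

def vector_triple_product(a, b, c):
    """Compute a × (b × c). By the BAC-CAB identity this equals
    b (a·c) − c (a·b): a vector lying in the plane spanned by b and c
    and perpendicular to a — a convenient way to generate an in-plane
    direction orthogonal to a given axis in 3D."""
    return np.cross(a, np.cross(b, c))
```

This is what lets a 3D construction pick out a direction in a chosen plane without leaving that plane.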
Findings
A comparative analysis was conducted between the proposed algorithm and two existing methods, the improved artificial potential field method and the rapidly-exploring random tree (RRT) method, under identical simulation conditions. The results indicate that the 3D-AMM method plans smoother trajectories and reduces the path length by 20.03% to 36.9%. Additionally, the experimental comparison confirms the feasibility and effectiveness of this method for obstacle avoidance in industrial scenarios.
Originality/value
This paper proposes a 3D-AMM algorithm for manipulator path planning in Cartesian space with multiple obstacles. This method effectively solves the problem of the artificial potential field method easily falling into local minimum points and the low path planning success rate of the rapidly-exploring random tree method.
Penghai Deng, Quansheng Liu and Haifeng Lu
Abstract
Purpose
The purpose of this paper is to propose a new combined finite-discrete element method (FDEM) to analyze the mechanical properties, failure behavior and slope stability of soil rock mixtures (SRM), in which the rocks within the SRM model have shape randomness, size randomness and spatial distribution randomness.
Design/methodology/approach
Based on the modeling method of heterogeneous rocks, the SRM numerical model can be built, and by adjusting the boundary between soil and rock, an SRM numerical model with any rock content can be obtained. The reliability and robustness of the new modeling method are verified by uniaxial compression simulation. In addition, this paper investigates the effects of rock topology, rock content, slope height and slope inclination on the stability of SRM slopes.
Findings
Investigations of the influences of rock content, slope height and slope inclination of SRM slopes showed that the slope height had little effect on the failure mode. The influences of rock content and slope inclination on the slope failure mode were significant. With increasing rock content and slope dip angle, SRM slopes gradually transitioned from a single shear failure mode to a multi-shear fracture failure mode, and shear fractures showed irregular and bifurcated characteristics in which the cut-off values of rock content and slope inclination were 20% and 80°, respectively.
Originality/value
This paper proposed a new modeling method for SRMs based on FDEM, with rocks having random shapes, sizes and spatial distributions.
Hend Monjed, Salma Ibrahim and Bjørn N. Jørgensen
Abstract
Purpose
This paper aims to examine the association between perceived firm risk and two reporting mechanisms: risk disclosure and earnings smoothing in the UK context.
Design/methodology/approach
This study juxtaposes three competing views, the “null”, the “divergence” and the “convergence” hypotheses, and empirically investigates whether risk disclosure and earnings smoothing affect firm perceived risk for a sample of large UK firms with rich and poor information environments. This study also uses the global financial crisis as an external shock on overall risk in the economy to investigate when and how managers use these two reporting mechanisms to shape the firm perceived risk.
Findings
This paper documents that risk disclosures have no significant effect on investors’ risk perceptions, consistent with risk disclosures containing boilerplate and generic statements about firm risk. This paper also finds that earnings smoothing reduces investors’ risk perceptions, reflecting investors’ interpretations about future firm performance. Additional tests reveal that earnings smoothing is not associated with perceived firm risk for firms with rich information environments and expanded risk disclosures. Furthermore, reporting smooth earnings decreases perceived firm risk following the global financial crisis. These findings are robust to alternative specifications and measures of earnings smoothing as well as post-filing perceived firm risk.
Research limitations/implications
This study does not distinguish between the garbling role and the informational role of earnings smoothing. The risk disclosure measurement used in this study, developed based on UK annual reports, may limit the generalizability of findings to other countries.
Practical implications
The findings suggest that managers should revise their risk disclosure strategies to provide in-depth details on firm risk. Investors might require information and thorough assessment to evaluate investment risks when firms provide generic risk disclosures and smoothed earnings by consulting sources like financial intermediaries. Regulators should keep an eye on firms reporting boilerplate risk disclosures and on how smoothing earnings impacts the firm perceived risk following economic turmoil, to guide interventions that promote market stability.
Originality/value
The findings provide new insights into when and how managers use their financial reporting discretion to make firms appear less risky and, therefore, influence investors’ risk perceptions.
Felix Endress, Julius Tiesler and Markus Zimmermann
Abstract
Purpose
Parts made by laser-based powder bed fusion of metals are particularly susceptible to contamination due to particles attached to the surface. This may compromise so-called technical cleanliness (e.g. per NASA RPTSTD-8070, ASTM G93, ISO 14952 or ISO 16232), which is important for many 3D-printed components, such as implants or liquid rocket engines. The purpose of the presented comparative study is to show how cleanliness is improved by design and by different surface treatment methods.
Design/methodology/approach
Convex and concave test parts were designed, built and surface-treated by combinations of media blasting, electroless nickel plating and electrochemical polishing. After cleaning and analysing the technical cleanliness according to ASTM and ISO standards, effects on particle contamination, appearance, mass and dimensional accuracy are presented.
Findings
Contamination reduction factors are introduced for different particle sizes and surface treatment methods. Surface treatments were more effective for concave design features; however, the initial and resulting absolute particle contamination was higher there. Results further indicate that there are trade-offs between cleanliness and other objectives in design. Design guidelines are introduced to resolve conflicts in design when requirements for cleanliness exist.
Originality/value
This paper recommends designing parts and corresponding process chains for manufacturing simultaneously. Incorporating post-processing characteristics into the design phase is both feasible and essential. In the experimental study, electroless nickel plating in combination with prior glass bead blasting resulted in the lowest total remaining particle contamination. This process applied for cleanliness is a novelty, as well as a comparison between the different surface treatment methods.
Huijun Tu and Shitao Jin
Abstract
Purpose
Due to the complexity and diversity of megaprojects, the architectural programming process often involves multiple stakeholders, making decision-making difficult and susceptible to subjective factors. This study aims to propose an architectural programming methodology system (APMS) for megaprojects based on group decision-making model to enhance the accuracy and transparency of decision-making, and to facilitate participation and integration among stakeholders. This method allows multiple interest groups to participate in decision-making, gathers various perspectives and opinions, thereby improving the quality and efficiency of architectural programming and promoting the smooth implementation of projects.
Design/methodology/approach
This study first clarifies the decision-making subjects, decision objects, and decision methods of APMS based on group decision-making theory and value-based architectural programming methods. The entropy weight method and fuzzy TOPSIS method are then employed as calculation methods to comprehensively evaluate decision alternatives and derive optimal decision conclusions. The workflow of APMS consists of four stages: preparation, information, decision, and evaluation, ensuring that the decision-making process is scientific and systematic.
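The entropy weight step can be sketched as follows (a generic implementation of the standard method for a positive decision matrix; the study's actual criteria and data are not reproduced, and the fuzzy TOPSIS ranking stage is omitted):

```python
import numpy as np

def entropy_weights(M):
    """Entropy weight method for an (n alternatives × m criteria)
    decision matrix of positive scores: criteria whose values vary
    more across alternatives carry more information and so receive
    larger weights."""
    n, m = M.shape
    P = M / M.sum(axis=0)                      # column-wise proportions
    # Shannon entropy per criterion, normalized to [0, 1].
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()                         # weights sum to one
```

A criterion on which all alternatives score identically contributes nothing to the ranking and receives zero weight, which is the objectivity property that motivates pairing this method with TOPSIS.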
Findings
This study conducted field research and empirical analysis on a practical megaproject of a comprehensive transport hub to verify the effectiveness of APMS. The results show that, in terms of both short-distance and long-distance transportation modes, the decision-making results of APMS are largely consistent with the preliminary programming outcomes of the project. However, regarding transfer modes, the APMS decision-making results revealed certain discrepancies between the project's current status and the preliminary programming.
Originality/value
APMS addresses the shortcomings in decision accuracy and stakeholder participation and integration in the current field of architectural programming. It not only enhances stakeholder participation and interaction but also considers various opinions and interests comprehensively. Additionally, APMS has significant potential in optimizing project performance, accelerating project processes, and reducing resource waste.
Osama Habbal, Ahmad Farhat, Reem Khalil and Christopher Pannier
Abstract
Purpose
The purpose of this study is to assess a novel method for creating tangible three-dimensional (3D) morphologies (scaled models) of neuronal reconstructions and to evaluate its cost-effectiveness, accessibility and applicability through a classroom survey. The study addresses the challenge of accurately representing intricate and diverse dendritic structures of neurons in scaled models for educational purposes.
Design/methodology/approach
The method involves converting neuronal reconstructions from the NeuromorphoVis repository into 3D-printable mold files. An operator prints these molds using a consumer-grade desktop 3D printer with water-soluble polyvinyl alcohol filament. The molds are then filled with casting materials like polyurethane or silicone rubber, before the mold is dissolved. We tested our method on various neuron morphologies, assessing the method’s effectiveness, labor, processing times and costs. Additionally, university biology students compared our 3D-printed neuron models with commercially produced counterparts through a survey, evaluating them based on their direct experience with both models.
Findings
An operator can produce a neuron morphology’s initial 3D replica in about an hour of labor, excluding a one- to three-day curing period, while subsequent copies require around 30 min each. Our method provides an affordable approach to crafting tangible 3D neuron representations, presenting a viable alternative to direct 3D printing with varied material options ensuring both flexibility and durability. The created models accurately replicate the fidelity and intricacy of original computer aided design (CAD) files, making them ideal for tactile use in neuroscience education.
Originality/value
The development of the data-processing and cost-effective casting method for this application is novel. Compared to a previous study, this method leverages lower-cost fused filament fabrication 3D printing to create accurate physical 3D representations of neurons. By using readily available materials and a consumer-grade 3D printer, the research addresses the high cost associated with alternative direct 3D printing techniques to produce such intricate and robust models. Furthermore, the paper demonstrates the practicality of these 3D neuron models for educational purposes, making a valuable contribution to the field of neuroscience education.