Search results
1 – 10 of over 3000

Kiia Aurora Einola, Laura Remes and Kenneth Dooley
Abstract
Purpose
This study aims to explore an emerging collection of smart building technologies, known as smart workplace solutions (SWS), in the context of facilities management (FM).
Design/methodology/approach
This study is based on semi-structured interviews with facility managers in Finland, Norway and Sweden who have deployed SWSs in their organizations. SWS features, based on empirical data from a previous study, were also used to further analyse the interviews.
Findings
This study analyses the benefits that SWSs bring from the facilities management point of view. It is clear that the impetus for change and for deploying SWSs in the context of FM is primarily driven by cost savings related to reductions in office space.
Research limitations/implications
This research has been conducted with a focus on office buildings only. However, other building types can learn from the benefits that facility managers receive in the area of user-centred smart buildings.
Practical implications
SWSs are often seen as employee experience solutions that are only related to “soft” elements such as collaboration, innovation and learning. Understanding the FM business case can help make a more practical case for their deployment.
Originality/value
SWSs are an emerging area, and this study has collected data from facility managers who use them daily.
Yandong Hou, Zhengbo Wu, Xinghua Ren, Kaiwen Liu and Zhengquan Chen
Abstract
Purpose
High-resolution remote sensing images possess a wealth of semantic information. However, these images often contain objects of different sizes and distributions, which make the semantic segmentation task challenging. In this paper, a bidirectional feature fusion network (BFFNet) is designed to address this challenge, which aims at increasing the accurate recognition of surface objects in order to effectively classify special features.
Design/methodology/approach
There are two main crucial elements in BFFNet. Firstly, the mean-weighted module (MWM) is used to obtain the key features in the main network. Secondly, the proposed polarization enhanced branch network performs feature extraction simultaneously with the main network to obtain different feature information. The authors then fuse these two features in both directions while applying a cross-entropy loss function to monitor the network training process. Finally, BFFNet is validated on two publicly available datasets, Potsdam and Vaihingen.
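The bidirectional fusion step can be illustrated with a minimal numpy sketch. The gating function, array shapes and names below are illustrative assumptions, not the authors' implementation, which uses learned modules at multiple scales:

```python
import numpy as np

def mean_weighted(features):
    """Gate each activation by its deviation from the per-channel mean,
    a simple stand-in for the paper's mean-weighted module (MWM)."""
    mean = features.mean(axis=(1, 2), keepdims=True)      # per-channel mean
    weights = 1.0 / (1.0 + np.exp(-(features - mean)))    # sigmoid gate
    return features * weights

def bidirectional_fuse(main_feat, branch_feat):
    """Fuse main-network and branch-network features in both directions:
    each stream is augmented with the other's gated response."""
    main_to_branch = branch_feat + mean_weighted(main_feat)
    branch_to_main = main_feat + mean_weighted(branch_feat)
    return main_to_branch + branch_to_main

# toy feature maps: (channels, height, width)
rng = np.random.default_rng(0)
main = rng.standard_normal((8, 4, 4))
branch = rng.standard_normal((8, 4, 4))
fused = bidirectional_fuse(main, branch)
print(fused.shape)  # (8, 4, 4)
```

The sketch only shows the bidirectional data flow; in the actual network the gates are learned and training is monitored with the cross-entropy loss mentioned above.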
Findings
Quantitative results on the two datasets show that the proposed network outperforms other mainstream segmentation networks by 2–6%. Complete ablation experiments are also conducted to demonstrate the effectiveness of each element in the network. In summary, BFFNet has proven effective in accurately identifying small objects and in reducing the effect of shadows on the segmentation process.
Originality/value
The originality of the paper is the proposal of a BFFNet based on multi-scale and multi-attention strategies to improve the ability to accurately segment high-resolution and complex remote sensing images, especially for small objects and shadow-obscured objects.
Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam
Abstract
Purpose
Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.
Design/methodology/approach
The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
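A grey-wolf-style feature search of this kind can be sketched as follows. The toy data, the correlation-based fitness function (a cheap stand-in for the paper's MLP-based evaluation) and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy dataset: 100 samples, 10 features, only the first 3 informative
X = rng.standard_normal((100, 10))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(float)

def fitness(mask):
    """Score a feature subset: mean |correlation| of the selected
    features with the label, minus a small size penalty."""
    if mask.sum() == 0:
        return -1.0
    corr = [abs(np.corrcoef(c, y)[0, 1]) for c in X[:, mask].T]
    return float(np.mean(corr)) - 0.01 * mask.sum()

# grey-wolf-style search: the three best wolves (alpha, beta, delta)
# pull the rest of the pack toward promising feature subsets
n_wolves, n_iter, n_feat = 12, 30, X.shape[1]
pack = rng.random((n_wolves, n_feat))      # continuous positions in [0, 1]
best_mask, best_fit = None, -np.inf
for t in range(n_iter):
    masks = pack > 0.5                     # binarise positions into subsets
    fits = np.array([fitness(m) for m in masks])
    order = np.argsort(-fits)
    if fits[order[0]] > best_fit:
        best_fit, best_mask = fits[order[0]], masks[order[0]].copy()
    alpha, beta, delta = pack[order[:3]]
    a = 2.0 * (1 - t / n_iter)             # exploration shrinks over time
    for i in range(n_wolves):
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(n_feat), rng.random(n_feat)
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - pack[i]))
        pack[i] = np.clip(np.mean(moves, axis=0), 0.0, 1.0)

print("selected features:", np.flatnonzero(best_mask))
```

In the paper's pipeline the fitness of each candidate subset would instead be the validation performance of an MLP trained on those features, which is what makes the wrapper search expensive but accurate.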
Findings
The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.
Originality/value
Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.
Luís Jacques de Sousa, João Poças Martins and Luís Sanhudo
Abstract
Purpose
Factors like bid price, submission time, and number of bidders influence the procurement process in public projects. These factors and the award criteria may impact the project’s financial compliance. Predicting budget compliance in construction projects has been traditionally challenging, but Machine Learning (ML) techniques have revolutionised estimations.
Design/methodology/approach
In this study, Portuguese Public Procurement Data (PPPData) was utilised as the model’s input. Notably, this dataset exhibited a substantial imbalance in the target feature. To address this issue, the study evaluated three distinct data balancing techniques: oversampling, undersampling, and the SMOTE method. Next, a comprehensive feature selection process was conducted, leading to the testing of five different algorithms for forecasting budget compliance. Finally, a secondary test was conducted, refining the features to include only those elements that procurement technicians can modify while also considering the two most accurate predictors identified in the previous test.
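The SMOTE step can be sketched in a few lines: new minority-class samples are synthesized by interpolating between a minority sample and one of its k nearest minority-class neighbours. This is a minimal illustrative implementation with invented toy data, not the routine the authors used; in practice an established implementation such as `imblearn.over_sampling.SMOTE` would be preferred:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: interpolate between a minority sample
    and one of its k nearest minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]   # k nearest per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = neighbours[i, rng.integers(min(k, n - 1))]
        lam = rng.random()                      # interpolation factor
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# toy minority class: 8 samples, 4 features
rng = np.random.default_rng(1)
minority = rng.standard_normal((8, 4)) + 3.0
new_samples = smote(minority, n_new=42, k=3, rng=rng)
print(new_samples.shape)  # (42, 4)
```

Because each synthetic point is a convex combination of two existing minority samples, the balanced dataset stays inside the minority class's region of the feature space rather than duplicating points, which is what distinguishes SMOTE from plain oversampling.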
Findings
The findings indicate that employing the SMOTE method on the scraped data can achieve a balanced dataset. Furthermore, the results demonstrate that the Adam ANN algorithm outperformed others, boasting a precision rate of 68.1%.
Practical implications
The model can aid procurement technicians during the tendering phase by using historical data and analogous projects to predict performance.
Social implications
Although the study reveals that ML algorithms cannot accurately predict budget compliance using procurement data, they can still provide project owners with insights into the most suitable criteria, aiding decision-making. Further research should assess the model’s impact and capacity within the procurement workflow.
Originality/value
Previous research predominantly focused on forecasting budgets by leveraging data from the private construction execution phase. While some investigations incorporated procurement data, this study distinguishes itself by using an imbalanced dataset and anticipating compliance rather than predicting budgetary figures. The model predicts budget compliance by analysing qualitative and quantitative characteristics of public project contracts. The research paper explores various model architectures and data treatment techniques to develop a model to assist the Client in tender definition.
Gautam Srivastava and Surajit Bag
Abstract
Purpose
Data-driven marketing is replacing conventional marketing strategies. The modern marketing strategy is based on insights derived from customer behavior information gathered from their facial expressions and neuro-signals. This study explores the potential for face recognition and neuro-marketing in modern-day marketing.
Design/methodology/approach
The study conducts an in-depth examination of the extant literature on neuro-marketing and facial recognition marketing. The articles for review are downloaded from the Scopus database, and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) is then used to screen and choose the relevant papers. The systematic literature review method is applied to conduct the study.
Findings
An extensive review of the literature reveals that the domains of neuro-marketing and face recognition marketing remain understudied. The authors’ review of selected papers delivers five neuro-marketing and facial recognition marketing themes that are essential to modern marketing concepts.
Practical implications
Neuro-marketing and facial recognition marketing are artificial intelligence (AI)-enabled marketing techniques that assist in gaining cognitive insights into human behavior. The findings would be of use to managers in designing marketing strategies to enhance their marketing approach and boost conversion rates.
Originality/value
The uniqueness of this study lies in that it provides an updated review on neuro-marketing and face recognition marketing.
Zoltán Pápai, Péter Nagy and Aliz McLean
Abstract
Purpose
This study aims to estimate the quality-adjusted changes in residential mobile consumer prices by controlling for the changes in the relevant service characteristics and quality, in a case study on Hungary between 2015 and 2021; compare the results with changes measured by the traditionally calculated official telecommunications price index of the Statistical Office; and discuss separating the hedonic price changes from the effect of a specific government intervention that occurred in Hungary, namely, the significant reduction in the value added tax rate (VAT) levied on internet services.
Design/methodology/approach
Since the price of commercial mobile offers does not directly reflect the continuous improvements in service characteristics and functionalities over time, the price changes need to be adjusted for changes in quality. The authors use hedonic regression analysis to address this issue.
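A time-dummy hedonic regression of this kind can be sketched with simulated data: the log of the price is regressed on quality characteristics plus year dummies, and the coefficient on the year dummy gives the quality-adjusted price change. The characteristics, coefficients and data below are invented for illustration and are not the paper's specification:

```python
import numpy as np

# toy panel of mobile plans: data allowance (GB), a speed-tier indicator,
# and a dummy for the second year of observation
rng = np.random.default_rng(7)
n = 200
gb = rng.uniform(1, 30, n)
speed = rng.integers(0, 2, n).astype(float)
year = rng.integers(0, 2, n).astype(float)       # 0 = base year, 1 = next year
# simulated log prices: quality raises price, the pure time effect lowers it
log_price = (2.0 + 0.03 * gb + 0.25 * speed - 0.10 * year
             + rng.normal(0, 0.05, n))

# hedonic regression: log(price) = b0 + b1*gb + b2*speed + b3*year
X = np.column_stack([np.ones(n), gb, speed, year])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# exp(b3) - 1 is the quality-adjusted price change between the two years
qa_change = np.exp(beta[3]) - 1
print(f"quality-adjusted price change: {qa_change:.1%}")
```

Holding the characteristics fixed in the regression is what separates genuine price movement from the price effect of improving data allowances and speeds, which is exactly the adjustment a traditional price index misses.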
Findings
The results show significant hedonic price changes of over 30% across the observed seven-year period, which turn out to be primarily driven by significant developments in the underlying service characteristics rather than by the VAT policy change.
Originality/value
This paper contributes to the literature on hedonic price analyses on complex telecommunications service plans and enhances this methodology by using weights and analysing the content-related features of the mobile packages.
Givemore Muchenje, Marko Seppänen and Hongxiu Li
Abstract
Purpose
The study explores the extent to which business analytics can address business problems using the task-technology fit theory.
Design/methodology/approach
The qualitative research approach of pattern matching was adopted for data analysis and 12 semi-structured interviews were conducted. Four propositions derived from the literature on task-technology fit are compared to emerging core themes from the empirical data.
Findings
The study establishes the relationships between various forms of fit, arguing that the iterative application of business analytics improves problem understanding and solutions. It contends that both under-fit and over-fit can be acceptable, since achieving an ideal fit may be too costly and outcomes may be unaffected by the misfit. The study demonstrates that managers should appreciate that there may be a distinction between those who create business analytics solutions and those who apply them to solve problems.
Originality/value
Extant studies on business analytics have not focused on how the match between business analytics and tasks affects the extent to which problems can be addressed, which in turn determines business value. This study enriches the literature on business analytics by linking business analytics and business value through problem resolution, as demonstrated by task-technology fit. To the authors’ knowledge, this study might be the first to apply pattern matching to study the fit between technology and tasks.
Ambica Ghai, Pradeep Kumar and Samrat Gupta
Abstract
Purpose
Web users rely heavily on online content to make decisions without assessing its veracity. The online content, comprising text, image, video or audio, may be tampered with to influence public opinion. Since consumers of online information (or misinformation) tend to trust content when images supplement the text, image manipulation software is increasingly being used to forge images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.
Design/methodology/approach
The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. The image transformation technique aids the identification of relevant features for the network to train effectively. After that, the pre-trained customized convolutional neural network is used to train on the public benchmark datasets, and the performance is evaluated on the test dataset using various parameters.
Findings
The comparative analysis of image transformation techniques and the experiments conducted on benchmark datasets from a variety of socio-cultural domains establish the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.
Research limitations/implications
This study bears implications for several important aspects of research on image forgery detection. First, this research adds to the recent discussion on feature extraction and learning for image forgery detection. While prior research on image forgery detection handcrafted the features, the proposed solution contributes to the stream of literature that automatically learns the features and classifies the images. Second, this research contributes to the ongoing effort to curtail the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms. The study addresses the call for greater emphasis on the development of robust image transformation techniques.
Practical implications
This study carries important practical implications for various domains such as forensic sciences, media and journalism where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of the article or post before it is shared over the Internet. The content shared over the Internet by the users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.
Social implications
In the current scenario, wherein most image forgery detection studies assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early detection of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the physical spread and psychological impact of forged images on social media.
Originality/value
This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little explored image transformation techniques and customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.
Vaishali Rajput, Preeti Mulay and Chandrashekhar Madhavrao Mahajan
Abstract
Purpose
Nature’s evolution has shaped intelligent behaviors in creatures like insects and birds, inspiring the field of Swarm Intelligence. Researchers have developed bio-inspired algorithms to address complex optimization problems efficiently. These algorithms strike a balance between computational efficiency and solution optimality, attracting significant attention across domains.
Design/methodology/approach
Bio-inspired optimization techniques for feature engineering and their applications are systematically reviewed, with the chief objective of assessing the statistical influence and significance of “bio-inspired optimization”-based computational models by referring to the vast research literature published between 2015 and 2022.
Findings
The Scopus and Web of Science databases were explored for review with focus on parameters such as country-wise publications, keyword occurrences and citations per year. Springer and IEEE emerge as the most creative publishers, with indicative prominent and superior journals, namely, PLoS ONE, Neural Computing and Applications, Lecture Notes in Computer Science and IEEE Transactions. The “National Natural Science Foundation” of China and the “Ministry of Electronics and Information Technology” of India lead in funding projects in this area. China, India and Germany stand out as leaders in publications related to bio-inspired algorithms for feature engineering research.
Originality/value
The review findings integrate various bio-inspired algorithm selection techniques over a diverse spectrum of optimization techniques. Ant colony optimization contributes to decentralized and cooperative search strategies, bee colony optimization (BCO) improves collaborative decision-making, particle swarm optimization leads to an exploration-exploitation balance and bio-inspired algorithms offer a range of nature-inspired heuristics.
Hui-Min Lai, Shin-Yuan Hung and David C. Yen
Abstract
Purpose
Seekers who visit professional virtual communities (PVCs) are usually motivated by knowledge-seeking, which is a complex cognitive process. How do seekers search for knowledge, and how is their search linked to prior knowledge or PVC situation factors? From the cognitive process and interactional psychology perspectives, this study investigated the three-way interactions between seekers’ expertise, task complexity, and perceptions of PVC features (i.e. knowledge quality and system quality) on knowledge-seeking strategies and resultant outcomes.
Design/methodology/approach
A field experiment was conducted with 119 seekers in a PVC using a 2 × 2 factorial design of seekers’ expertise (i.e. expert versus novice) and task complexity (i.e. low versus high).
Findings
The study reveals three significant insights: (1) For a high-complexity task, experts adopt an ask-directed searching strategy compared to novices, whereas novices adopt a browsing strategy; (2) For a high-complexity task, experts who perceive a high system quality are more likely than novices to adopt an ask-directed searching strategy; and (3) Task completion time and task quality are associated with the adoption of ask-directed searching strategies, whereas knowledge seekers’ satisfaction is more associated with the adoption of browsing strategy.
Originality/value
We draw on the perspectives of cognitive process and interactional psychology to explore potential two- and three-way interactions of seekers’ expertise, task complexity, and PVC features on the adoption of knowledge-seeking strategies in a PVC context. Our findings provide deep insights into seekers’ behavior in a PVC, given the popularity of the search for knowledge in PVCs.