Search results
1 – 10 of 115
Hui-Min Lai, Shin-Yuan Hung and David C. Yen
Abstract
Purpose
Seekers who visit professional virtual communities (PVCs) are usually motivated by knowledge-seeking, which is a complex cognitive process. How do seekers search for knowledge, and how is their search linked to prior knowledge or PVC situation factors? From the cognitive process and interactional psychology perspectives, this study investigated the three-way interactions between seekers’ expertise, task complexity, and perceptions of PVC features (i.e. knowledge quality and system quality) on knowledge-seeking strategies and resultant outcomes.
Design/methodology/approach
A field experiment was conducted with 119 seekers in a PVC using a 2 × 2 factorial design of seekers’ expertise (i.e. expert versus novice) and task complexity (i.e. low versus high).
Findings
The study reveals three significant insights: (1) for a high-complexity task, experts are more likely than novices to adopt an ask-directed searching strategy, whereas novices tend to adopt a browsing strategy; (2) for a high-complexity task, experts who perceive a high system quality are more likely than novices to adopt an ask-directed searching strategy; and (3) task completion time and task quality are associated with the adoption of ask-directed searching strategies, whereas knowledge seekers' satisfaction is more strongly associated with the adoption of a browsing strategy.
Originality/value
We draw on the perspectives of cognitive process and interactional psychology to explore potential two- and three-way interactions of seekers’ expertise, task complexity, and PVC features on the adoption of knowledge-seeking strategies in a PVC context. Our findings provide deep insights into seekers’ behavior in a PVC, given the popularity of the search for knowledge in PVCs.
Hei-Chia Wang, Martinus Maslim and Hung-Yu Liu
Abstract
Purpose
A clickbait is a deceptive headline designed to boost ad revenue without presenting closely relevant content. Clickbait has numerous negative repercussions, such as making viewers feel tricked and unhappy, causing long-term confusion and even attracting cyber criminals. Automatic clickbait detection algorithms have been developed to address this issue, but existing technologies are limited by using only one semantic representation for the same term and by the scarcity of Chinese datasets. This study aims to overcome these limitations of automated clickbait detection for Chinese text.
Design/methodology/approach
This study trains the model on both news headlines and news content to capture the probable relationship between clickbait news headlines and news content. In addition, part-of-speech elements are used to generate the most appropriate semantic representation for clickbait detection, improving clickbait detection performance.
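The core intuition, that a clickbait headline promises more than the article delivers, can be illustrated with a deliberately simple sketch. This is not the paper's CA-CD transfer-learning model: it uses plain bag-of-words cosine similarity between headline and body, and the 0.2 threshold is an arbitrary value chosen only for this toy example.

```python
from collections import Counter
from math import sqrt

def bow_cosine(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(count * b[token] for token, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def looks_like_clickbait(headline, article, threshold=0.2):
    """Flag a headline whose wording overlaps too little with the article body.
    The threshold is illustrative, not taken from the paper."""
    return bow_cosine(headline, article) < threshold
```

A context-aware model replaces the word-overlap score with learned semantic representations, which is what lets it catch baity headlines that share surface vocabulary with the article.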
Findings
This research successfully compiled a dataset containing up to 20,896 Chinese clickbait news articles. This collection contains news headlines, articles, categories and supplementary metadata. The suggested context-aware clickbait detection (CA-CD) model outperforms existing clickbait detection approaches on many criteria, demonstrating the proposed strategy's efficacy.
Originality/value
The originality of this study resides in the newly compiled Chinese clickbait dataset and contextual semantic representation-based clickbait detection approach employing transfer learning. This method can modify the semantic representation of each word based on context and assist the model in more precisely interpreting the original meaning of news articles.
Abstract
Purpose
Adequate means for easily viewing, browsing and searching knowledge graphs (KGs) are crucial but still a limiting factor. Therefore, this paper aims to present virtual properties as a valuable user interface (UI) concept for ontologies and KGs that can mitigate these issues. Virtual properties provide shortcuts on a KG that can enrich the scope of a class with information beyond its direct neighborhood.
Design/methodology/approach
Virtual properties can be defined as enhancements of Shapes Constraint Language (SHACL) property shapes. Their values are computed on demand via SPARQL Protocol and RDF Query Language (SPARQL) queries. An approach is demonstrated that can help to identify suitable virtual property candidates. Virtual properties can be realized as integral functionality of generic, frame-based UIs, which can automatically provide views and masks for viewing and searching a KG.
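The idea of a shortcut computed on demand can be sketched in a few lines of Python rather than SHACL/SPARQL. The triples, node names and predicates below are invented for illustration; in the described system the graph is RDF, the shortcut is declared on a SHACL property shape, and the on-demand computation is a SPARQL query.

```python
# A toy knowledge graph as (subject, predicate, object) triples.
TRIPLES = {
    ("ex:Motor1", "hasPart", "ex:Bearing7"),
    ("ex:Bearing7", "suppliedBy", "ex:SupplierX"),
    ("ex:Motor1", "hasPart", "ex:Shaft2"),
    ("ex:Shaft2", "suppliedBy", "ex:SupplierY"),
}

def objects(subject, predicate):
    """Direct neighbors of `subject` via `predicate`."""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}

def virtual_part_supplier(subject):
    """A 'virtual property': a two-hop shortcut (hasPart -> suppliedBy)
    computed on demand, enriching the subject's view with information
    beyond its direct neighborhood without materializing new edges."""
    return {supplier
            for part in objects(subject, "hasPart")
            for supplier in objects(part, "suppliedBy")}
```

Because the value is derived at query time, the UI can display suppliers directly on a motor's view even though the graph stores no motor-to-supplier edge.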
Findings
The virtual property approach has been implemented at Bosch and is usable by more than 100,000 Bosch employees in a productive deployment, which proves the maturity and relevance of the approach for Bosch. It has successfully been demonstrated that virtual properties can significantly improve KG UIs by enriching the scope of a class with information beyond its direct neighborhood.
Originality/value
SHACL-defined virtual properties and their automatic identification are a novel concept. To the best of the author’s knowledge, no such approach has been established nor standardized so far.
Abstract
Learning outcomes
After completing the case study, students will learn to use the Lean Canvas to identify a business opportunity. They will also learn to balance the exploitation of profit-producing activities with the exploration of new opportunities according to environmental dynamism.
Case overview/synopsis
WONK, a tutor discovery and booking app, was launched by MyEdge in 2016 to search for and book verified tutors in locations served by the company. Based on their requirements, parents and students could sort and book verified tutors in their area. Through the app, users could search for academic and hobby classes in the form of individual tuitions. The ease of use and the service offering made it a popular app, with students enrolling every six minutes. Within a span of six years, WONK had provided services to thousands of students in 20+ countries and had 200,000+ tutors registered on the app from 15,000+ pin codes. Despite a plethora of Edtech companies in India, a different business model and service offering gave WONK an edge over other Edtech companies. To keep up with customer needs, the company was constantly upgrading its technology and expanding its services. Vidhu Goyal, the founder of the company, was enjoying the progress when another development in technology hit the world. With the launch of applications based on artificial intelligence, would the business be disrupted or not?
Complexity academic level
The case study is recommended to be taught in a 90-min class to Master of Business Administration students. The case study may be used in courses related to strategy, information systems management and entrepreneurship.
Supplementary materials
Teaching notes are available for educators only.
Subject code
CSS 11: Strategy.
Xiaoling Li, Zongshu Wu, Qing Huang and Juanyi Liu
Abstract
Purpose
This study develops an empirical framework to address how large third-party sellers (TPSs) can apply customer acquisition strategies to improve their performance in consumers’ person-goods matching process and how the platform firm’s similar strategies moderate the effects of TPSs’ strategies.
Design/methodology/approach
Using data collected from the top ten TPSs from a Chinese e-commerce platform, the fixed effect model is used to validate the conceptual model and hypotheses.
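The key mechanic of a fixed effect model is the "within" transformation: demeaning each observation by its group (here, seller) mean so that time-invariant seller characteristics drop out before estimation. A minimal sketch, with invented numbers standing in for the sellers' panel data:

```python
from statistics import mean

def within_transform(values, groups):
    """Apply the 'within' (demeaning) transformation used by fixed-effect
    estimation: subtract each group's mean from its observations, so any
    time-invariant group effect cancels out of the regression."""
    group_means = {g: mean(v for g2, v in zip(groups, values) if g2 == g)
                   for g in set(groups)}
    return [v - group_means[g] for g, v in zip(groups, values)]
```

Running the regression on the demeaned sales and strategy variables then identifies the strategies' effects from within-seller variation over time, not from stable differences between sellers.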
Findings
The study results show that both market detection strategy and matching optimization strategy can help large TPSs improve their sales performance. Moreover, the similar market detection strategy applied by the platform firm weakens the effect of large TPSs’ customer acquisition strategies, while the similar matching optimization strategy applied by the platform firm strengthens the effect of large TPSs’ customer acquisition strategies.
Originality/value
This study provides firsthand evidence on the performance of large TPSs’ and the platform firm’s strategies. It demonstrates the effectiveness of large TPSs’ market detection strategy and matching optimization strategy, which can be adopted to meet, respectively, consumers’ search and evaluation motivations in their person-goods matching process. Moreover, it identifies the role of platform firms by showing how similar strategies adopted by the platform firm moderate the effect of large TPSs’ customer acquisition strategies.
Sihao Li, Jiali Wang and Zhao Xu
Abstract
Purpose
The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.
Design/methodology/approach
This study first analyzed typical building standards in the fields of architecture and fire protection, and an ontology of their elements was developed. Based on this, a building standard corpus was built, and deep learning models were trained to automatically label building standard texts. Neo4j is utilized for knowledge graph construction and storage, and a data extraction method based on Dynamo is designed to obtain checking data files. After that, a matching algorithm is devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking for BIM models.
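The final matching step can be pictured with a minimal sketch. The rule values and element data below are invented, and in the described framework the rules live as triples in the Neo4j knowledge graph while the element data come from Dynamo extraction, not Python literals.

```python
# Hypothetical rules distilled from a standard, as (element type,
# property, minimum allowed value) triples -- illustrative numbers only.
RULES = [
    ("Door", "width_mm", 900),
    ("Corridor", "width_mm", 1200),
]

def check_model(elements):
    """Match extracted BIM element data against the rule triples and
    return one (element id, property, actual, required) tuple per breach."""
    violations = []
    for elem in elements:
        for etype, prop, minimum in RULES:
            if elem["type"] == etype and elem.get(prop, 0) < minimum:
                violations.append((elem["id"], prop, elem.get(prop), minimum))
    return violations
```

The automated pipeline's value is that both sides of this comparison, the rule triples and the checking data files, are produced automatically rather than hand-coded.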
Findings
Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.
Originality/value
This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and completes the automated process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.
Priya Garg and Shivarama Rao K.
Abstract
Purpose
This paper aims to discuss the process of building a 24×7 reference platform that provides farmers with easy access to information at any time from any location. The platform takes a text string as input and processes it to return the desired result to the user.
Design/methodology/approach
An interactive Web-based chatbot named AgriRef was developed using the free version of Dialogflow. The intents were defined based on the conversation flow diagram. Furthermore, the application was integrated with a website on a local server and with the Telegram application.
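The notion of an intent, training phrases mapped to a response, can be sketched with a toy matcher. The intent names, phrases and replies below are invented, and real Dialogflow intent classification is far more robust than this word-overlap heuristic.

```python
# Hypothetical intents in the spirit of a Dialogflow agent: each intent
# pairs training phrases with a canned response (all strings invented).
INTENTS = {
    "greeting": (["hello", "hi there"],
                 "Hello! How can AgriRef help you?"),
    "seed_query": (["which seed variety suits my soil",
                    "recommend a seed variety"],
                   "Please share your district and soil type."),
}

def match_intent(query):
    """Return the intent whose training phrases share the most words with
    the query, or None when nothing overlaps at all."""
    words = set(query.lower().split())
    best, best_score = None, 0
    for name, (phrases, _reply) in INTENTS.items():
        score = max(len(words & set(p.lower().split())) for p in phrases)
        if score > best_score:
            best, best_score = name, score
    return best
```

In Dialogflow the matching, context handling and fallback behavior come for free; the developer's job is chiefly to enumerate intents like these from the conversation flow diagram.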
Findings
With this chatbot application, farmers will be able to get answers to their queries. It provides a human-like conversational interface for farmers. It will also help librarians of agricultural libraries save time in answering common queries.
Originality/value
This paper describes the various steps involved in developing the chatbot application using Dialogflow.
Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji
Abstract
Purpose
The purpose of this study is to develop a domain independent, cost-effective, time-saving and semi-automated ontology generation framework that could extract taxonomic concepts from unstructured text corpus. In the human disease domain, ontologies are found to be extremely useful for managing the diversity of technical expressions in favour of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade available ones.
Design/methodology/approach
This paper proposes a semi-automated approach that extracts entities/relations via text mining of scientific publications. A code named text mining-based ontology (TmbOnt) is generated to assist a user in capturing, processing and establishing ontology elements. This code takes a pile of unstructured text files as input and projects them into high-valued entities or relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the demanded ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
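A rough stand-in for the predecessor/successor phrase step is harvesting candidate taxonomic pairs from "X is a/an Y" patterns. This single regex is only a sketch of the idea; the actual TmbOnt pipeline tokenizes millions of terms and keeps a human in the loop to filter the candidates.

```python
import re

# Candidate hyponym/hypernym pairs from simple copula patterns.
PATTERN = re.compile(r"(\w[\w ]*?) is an? ([\w ]+?)[.,]")

def extract_pairs(corpus):
    """Return (narrower term, broader term) candidates for the taxonomy."""
    return [(m.group(1).strip(), m.group(2).strip())
            for m in PATTERN.finditer(corpus)]
```

Each extracted pair is then a candidate parent-child edge in the taxonomy, accepted or rejected by the supervising user.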
Findings
The proposed approach processed over 3.8 million tokenized terms of those records and yielded the resultant glaucoma ontology-taxonomy. Compared with two famous disease ontologies, TmbOnt-driven taxonomy demonstrated a 60%–100% coverage ratio against famous medical thesauruses and ontology taxonomies, such as Human Disease Ontology, Medical Subject Headings and National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.
Originality/value
According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.
Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu
Abstract
Purpose
This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.
Design/methodology/approach
The method involves an alternating two-step framework, comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences including both global and local features, are coupled with deterministic annealing to address the non-convexity problem of registration. For transformation updating, the expectation-maximization iteration scheme is introduced to iteratively refine correspondence and transformation estimation until convergence.
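The correspondence-estimation step can be sketched as the E-step of a plain GMM registration. This simplification omits the paper's integrated feature divergence and uses only squared point distances with equal mixing weights; the annealing behavior is controlled here by the variance parameter, with the example coordinates invented.

```python
from math import exp

def correspondence_posteriors(targets, sources, sigma2):
    """E-step of a GMM-based registration sketch: each source point is a
    Gaussian centroid with equal mixing weight; return, for every target
    point, the posterior probability of each source point having generated
    it. `sigma2` acts as the annealing temperature: large values give
    diffuse correspondences, small values near-binary ones."""
    out = []
    for tx, ty in targets:
        weights = [exp(-((tx - sx) ** 2 + (ty - sy) ** 2) / (2.0 * sigma2))
                   for sx, sy in sources]
        total = sum(weights)
        out.append([w / total for w in weights])
    return out
```

Deterministic annealing starts with a large sigma2 so the objective is smooth and gradually shrinks it, which is what lets the iteration escape poor local optima before correspondences harden.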
Findings
The experiments confirm that the proposed registration approach exhibits remarkable robustness to deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. The method is applied to stabilizing and securing intermodal containers loaded on ships. The results demonstrate that the proposed registration framework exhibits excellent adaptability to real-scan point clouds and achieves comparatively superior alignments in a shorter time.
Originality/value
The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.
Xiaohong Shi, Ziyan Wang, Runlu Zhong, Liangliang Ma, Xiangping Chen and Peng Yang
Abstract
Purpose
Smart contracts are written in high-level programming languages, compiled into Ethereum Virtual Machine (EVM) bytecode, deployed onto blockchain systems and called with the corresponding address by transactions. The deployed smart contracts are immutable, even if there are bugs or vulnerabilities. Therefore, it is critical to verify smart contracts before deployment. This paper aims to help developers effectively and efficiently locate potential defects in smart contracts.
Design/methodology/approach
GethReplayer, a smart contract testing method based on transaction replay, is proposed. It constructs a parallel transaction execution environment with two virtual machines to compare the execution results. It uses real transaction data existing on Ethereum and the source code of the tested smart contracts as inputs, conditionally substitutes the bytecode of the tested smart contract into the testing EVM and then monitors the environmental information to check the correctness of the contract.
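The replay idea, running the same recorded transactions through two environments and flagging any divergence, can be sketched without an EVM at all. Everything below is a toy: the "virtual machines" are two Python functions over a balance map, and the off-by-one bug is planted so the comparison has something to find.

```python
def reference_transfer(state, tx):
    """Reference execution environment: apply a value transfer."""
    state = dict(state)
    state[tx["from"]] -= tx["value"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
    return state

def patched_transfer(state, tx):
    """Environment running the contract under test; a hypothetical
    off-by-one bug is planted here for illustration."""
    state = dict(state)
    state[tx["from"]] -= tx["value"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["value"] - 1
    return state

def replay(transactions, initial_state, impl_a, impl_b):
    """Execute the same recorded transactions in both environments and
    return the indices at which the resulting states diverge."""
    a, b, mismatches = dict(initial_state), dict(initial_state), []
    for i, tx in enumerate(transactions):
        a, b = impl_a(a, tx), impl_b(b, tx)
        if a != b:
            mismatches.append(i)
    return mismatches
```

Reporting the first diverging transaction rather than a whole failing test suite is what makes this style of replay effective for error locating.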
Findings
Experiments verified that the proposed method is effective in smart contract testing. Virtual environmental information has a significant effect on the success of transaction replay, which is the basis for the performance of the method. The efficiency of error locating was approximately 14 times faster with the proposed method than without. In addition, the proposed method supports gas consumption analysis.
Originality/value
This paper addresses the difficulty that developers encounter in testing smart contracts before deployment and focuses on helping develop smart contracts with as few defects as possible. GethReplayer is expected to be an alternative solution for smart contract testing and provide inspiration for further research.