Search results
1 – 10 of over 148,000
Ming Li and Jing Liang
Abstract
Purpose
Knowledge adoption is the key to effective knowledge exchange in virtual question-and-answer (Q&A) communities. Although previous studies have examined the effects of knowledge content, knowledge source credibility and the personal characteristics of knowledge seekers on knowledge adoption in virtual Q&A communities from a static perspective, the impact of answer deviation on knowledge adoption has rarely been explored from a context-based perspective. The purpose of this study is to explore the impact of two-way deviation on knowledge adoption in virtual Q&A communities, with the aim of expanding the understanding of knowledge exchange and community management.
Design/methodology/approach
The same question often attracts multiple answers, and the same answerer may provide several of them. Knowledge seekers usually read multiple answers before making adoption decisions, so the impact of deviations among answers on knowledge seekers' knowledge adoption is critical. From a context-based perspective, a research model of the impact of the deviation of horizontal and vertical answers on knowledge adoption is established based on the heuristic-systematic model (HSM) and empirically examined with 88,287 Q&A data points and answerer data collected from Zhihu. Additionally, the moderating effects of static factors such as answerer reputation and answer length are examined.
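The notion of horizontal content deviation (how far one answer sits from the other answers to the same question) can be illustrated with a minimal sketch. A simple bag-of-words cosine distance stands in for the authors' actual text-similarity measure; all function and variable names here are hypothetical.

```python
from collections import Counter
import math

def cosine_distance(a: str, b: str) -> float:
    """1 minus the cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def horizontal_content_deviation(answers: list[str], i: int) -> float:
    """Mean distance of answer i from the other answers to the same question."""
    others = [a for j, a in enumerate(answers) if j != i]
    return sum(cosine_distance(answers[i], o) for o in others) / len(others)
```

An answer that restates what others already said scores near 0; an answer with little vocabulary overlap with its siblings scores near 1.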
Findings
The negative binomial regression results show that the content and emotion deviation of horizontal answers negatively affect knowledge seekers' knowledge adoption. The content deviation of vertical answers is negatively associated with knowledge adoption, while the emotion deviation of vertical answers is positively related to knowledge adoption. Moreover, answerer reputation positively moderates the negative effect of the emotion deviation of horizontal answers on knowledge adoption. Answer length weakens the negative correlation between the content deviation of horizontal and vertical answers and knowledge adoption.
Originality/value
This study extends previous research on knowledge adoption from a static perspective to a context-based perspective. Moreover, information deviation is expanded from a one-way variable to a two-way variable. The combined effects of static and contextual factors on knowledge adoption are further uncovered. This study can not only help knowledge seekers identify the best answers but also help virtual Q&A community managers optimize community design and operation to reduce the cost of knowledge search and improve the efficiency of knowledge exchange.
Chencheng Shi, Ping Hu, Weiguo Fan and Liangfei Qiu
Abstract
Purpose
Users' knowledge contribution behaviors are critical for online Q&A communities to thrive. Well-organized question threads in online Q&A communities enable users to clearly read existing answers and their evaluations before contributing. Based on the social comparison and peer influence literature, the authors examine peer influence on the informativeness of knowledge contributions in competitive settings. The authors also consider three levels of moderating factors concerning individuals' perception of competitiveness: question level, thread level and contributor level.
Design/methodology/approach
The authors collected data from one of the largest online Q&A communities in China. The hypotheses were validated using hierarchical linear models with cross-classified random effects. The generalized propensity score weighting method was employed for the robustness check.
Findings
The authors demonstrate peer influence arising from social comparison concerns among knowledge contribution behaviors in the same question thread. If more prior contributors choose to contribute long answers in a question thread, subsequent contributions are more informative. This peer influence is stronger for factual questions and for questions that are more popular to answer, but weaker for recommendation-type and well-answered questions and for contributors with higher social status.
Originality/value
This research provides new evidence of peer influence on online UGC contributions in competitive settings initiated by social comparison concerns. Additionally, the authors identify three levels of moderating factors (question level, thread level and contributor level) that are specific to online Q&A settings and relate to a contributor's perception of competitiveness, which affect the direct effect of peer influence on knowledge contributions. Rather than focusing on motivation and quality evaluation, the authors concentrate on the specific content of online knowledge contributions. Peer influence here is based not on actual acquaintance or a following relationship but on answering the same question. The authors also illustrate competitive peer influence in subjective and personalized behaviors in online UGC communities.
Mi Zhou, Bo Meng and Weiguo Fan
Abstract
Purpose
The current study aims to investigate the factors that affect the feedback received on answers to questions in social Q&A communities, and whether the expertise required by a question influences the role of these factors in shaping that feedback.
Design/methodology/approach
To understand the antecedents and consequences that influence the feedback received on answers to online community questions, the elaboration likelihood model (ELM) is applied in this study. The authors use web crawling and a combination of quantitative analyses. The data for this study came from Zhihu; in total, 353,775 responses were obtained to 1,531 questions, ranging from 49 to 23,681 responses per question. Each answer received 0 to 113,892 likes and 0 to 6,250 comments.
Findings
The answers' cognitive and emotional components and the answerer's influence positively affect user feedback behavior. In addition, the expertise required by a question moderates the effects of an answer's cognitive and emotional components on user feedback, as well as the effect of the answerer's influence on approval feedback.
Originality/value
This study builds upon a limited yet growing body of literature, of great relevance to scholars, practitioners and social media users, concerning the effects of the connotation of answers (i.e. their cognitive and emotional components) and the answerer's influence on user feedback (i.e. approval and collaborative feedback) in social Q&A communities. The authors further consider the moderating role of the domain expertise required by the question. The ELM is applied to explore the relationships between questions, answers and feedback. The findings add a new perspective to research on user feedback and have implications for the management of social Q&A communities.
Mitali Desai, Rupa G. Mehta and Dipti P. Rana
Abstract
Purpose
Scholarly communications, particularly questions and answers (Q&A) present on digital scholarly platforms, provide a new avenue to gain knowledge. However, several studies have raised a concern about the content anomalies in these Q&A and suggested a proper validation before utilizing them in scholarly applications such as influence analysis and content-based recommendation systems. The content anomalies are referred to as disinformation in this research. The purpose of this research is, first, to assess scholarly communications in order to identify disinformation and, second, to help scholarly platforms determine the scholars who probably disseminate such disinformation. These scholars are referred to as the probable sources of disinformation.
Design/methodology/approach
To identify disinformation, the proposed model deduces (1) content redundancy and contextual redundancy in questions, (2) contextual nonrelevance in answers with respect to the questions and (3) the quality of answers with respect to the expertise of the answering scholars. The model then determines the probable sources of disinformation using statistical analysis.
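The first step, detecting redundancy among questions, can be sketched with a simple token-overlap (Jaccard) check. This is only a stand-in for the advanced word-embedding technique the model actually uses; the threshold and all names are hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two questions."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def redundant_pairs(questions: list[str], threshold: float = 0.6) -> list[tuple[int, int]]:
    """Flag index pairs of questions whose token overlap exceeds the threshold."""
    flagged = []
    for i in range(len(questions)):
        for j in range(i + 1, len(questions)):
            if jaccard(questions[i], questions[j]) >= threshold:
                flagged.append((i, j))
    return flagged
```

Embedding-based similarity would additionally catch paraphrases that share few surface tokens, which is why the authors move beyond purely lexical measures.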
Findings
The model is evaluated on ResearchGate (RG) data. Results suggest that the model efficiently identifies disinformation from scholarly communications and accurately detects the probable sources of disinformation.
Practical implications
Different platforms with communication portals can use this model as a regulatory mechanism to restrict the propagation of disinformation. Scholarly platforms can use this model to build an accurate influence assessment mechanism and to generate relevant recommendations for their scholars.
Originality/value
The existing studies mainly deal with validating answers using statistical measures. The proposed model focuses on questions as well as answers and performs a contextual analysis using an advanced word embedding technique.
Yung-Ting Chuang and Ching-Hsien Wang
Abstract
Purpose
The purpose of this paper is to propose a mobile and social-based question-and-answer (Q&A) system that analyzes users' social relationships and past answering behavior, considers users' interest similarity and answer quality to infer suitable respondents and forwards the questions to users that are willing to give high quality answers.
Design/methodology/approach
This research applies first-order logic (FOL) inference to generate question/interest IDs that combine a user's social information, interests and social network intimacy in order to choose the nodes that can provide high-quality answers. After receiving a question, a friend can answer it, forward it to their friends according to the TTL (time-to-live) hop count, or send the answer directly to the server. This research collected data from the TripAdvisor.com website and used it for the experiment. The authors also collected previously answered questions from TripAdvisor.com; thus, subsequent answers could be forwarded to a centralized server to improve overall performance.
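The TTL-bounded forwarding step can be sketched as a breadth-first traversal of the friendship graph that decrements a hop counter at each forward. This is an illustrative protocol sketch, not the paper's implementation; the graph representation and names are hypothetical.

```python
from collections import deque

def forward_question(graph: dict[str, list[str]], asker: str, ttl: int) -> set[str]:
    """Return the set of friends a question reaches when each forward
    consumes one TTL hop, starting from the asker."""
    reached: set[str] = set()
    seen = {asker}
    queue = deque([(asker, ttl)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue  # TTL exhausted; stop forwarding from this node
        for friend in graph.get(node, []):
            if friend not in seen:
                seen.add(friend)
                reached.add(friend)
                queue.append((friend, hops - 1))
    return reached
```

Bounding the hop count is what lets the system avoid full broadcasting: the question only propagates through a small, socially close neighborhood of candidate answerers.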
Findings
The authors first note that even though the proposed system is decentralized, it can still accurately identify the appropriate respondents to provide high-quality answers. In addition, since the system can easily identify the best answerers, there is no need to implement broadcasting, which reduces the overall execution time and network bandwidth required. Moreover, the system allows users to obtain high-quality answers accurately and quickly after comparing and calculating interest IDs. The system also encourages frequent communication and interaction among users. Lastly, the experiments demonstrate that this system achieves high accuracy, high recall, low overhead, low forwarding cost and low response rate in all scenarios.
Originality/value
This paper proposes a mobile and social-based Q&A system that applies FOL inference calculation to analyze users' social relationships and past answering behavior, considers users' interest similarity and answer quality to infer suitable respondents and forwards the questions to users that are willing to give high quality answers. The experiments demonstrate that this system achieves high accuracy, high recall rate, low overhead, low forwarding cost and low response rate in all scenarios.
Abstract
Purpose
Multiple-choice questions (MCQs) and essays and short answer questions are the most common assessment protocols instructors use in their classrooms. However, the reliability and validity of these assessment protocols are controversial. The current study employed a survey research design using Qualtrics to determine faculty and student perspectives on using MCQs and essays and short answer questions in their courses, as well as their rationales for these preferences.
Design/methodology/approach
Eighty-five students and 67 faculty within the social sciences discipline participated in the study.
Findings
Sixty-five percent of the students strongly preferred MCQs over essays and short answer questions. However, faculty did not show a strong preference for either form of assessment (52.30% selected essays and short answer questions, and 47.69% preferred MCQs) in their courses. The study also explores why students and faculty prefer one form of assessment over the other.
Research limitations/implications
The findings of this study help in understanding current classroom assessment practices from faculty and student perspectives.
Originality/value
This is one of few studies that evaluated the faculty as well as student perspective on the use of MCQs and essays and short answer questions in the curriculum across the social science discipline.
Alton Y.K. Chua and Snehasish Banerjee
Abstract
Purpose
The purpose of this paper is to investigate the ways in which the effectiveness of answers in Yahoo! Answers, one of the largest community question answering sites (CQAs), is related to question types and answerer reputation. Effective answers are defined as those that are detailed, readable, superior in quality and contributed promptly. The five question types studied were factoid, list, definition, complex interactive and opinion. Answerer reputation refers to answerers' past track record in the community.
Design/methodology/approach
The data set comprises 1,459 answers posted in Yahoo! Answers in response to 464 questions that were distributed across the five question types. The analysis was done using factorial analysis of variance.
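Factorial ANOVA partitions variance across question type, answerer reputation and their interaction. The single-factor core of that partitioning (between-group versus within-group variance) can be sketched as follows; this is an illustration of the F statistic, not the authors' exact analysis, and the example data are invented.

```python
def f_statistic(groups: list[list[float]]) -> float:
    """One-way ANOVA F statistic: ratio of between-group to within-group
    mean squares. `groups` holds scores per level of one factor
    (e.g. answer quality per question type)."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A factorial design extends this by computing separate sums of squares for each factor and for the interaction term, which is how the study tests whether question type and reputation jointly shape answer effectiveness.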
Findings
The results indicate that factoid, definition and opinion questions are comparable in attracting high quality as well as readable answers. Although reputed answerers generally fared better in offering detailed and high-quality answers, novices were found to submit more readable responses. Moreover, novices were more prompt in answering factoid, list and definition questions.
Originality/value
By analysing variations in answer effectiveness with a twin focus on question types and answerer reputation, this study explores a strand of CQA research that has hitherto received limited attention. The findings offer insights to users and designers of CQAs.
Abstract
Briefly reviews previous literature by the author before presenting an original 12-step system integration protocol designed to ensure the success of companies or countries in their efforts to develop and market new products. Looks at the issues from different strategic levels, such as corporate, international, military and economic. Presents 31 case studies, ranging from the success of Japan in microchips to the failure of Xerox to sell its invention of the Alto personal computer three years before Apple; from the success of DNA and superconductor research to the success of Sunbeam in inventing and marketing food processors; and from the daring invention and production of atomic energy for survival to the success of sewing machine inventor Howe in co-operating on patents to compete in markets. Includes 306 questions and answers to reinforce the concepts introduced.