Search results
Tin Nok Leung, Yin Ming Hui, Canon K.L. Luk, Dickson K.W. Chiu and Kevin K.W. Ho
Abstract
Purpose
This study analyses the advantages and weaknesses of using Facebook to aid the learning of Japanese as a foreign language.
Design/methodology/approach
A questionnaire survey was conducted to collect data from 100 Hong Kong Japanese language learners (who are generally fluent in Chinese and English), ranging from total amateur to advanced learners (Japanese Language Proficiency Test (JLPT) qualified at different levels).
Findings
The authors' results suggest that the advantages of using Facebook to help learn Japanese include: (1) serving as a free-of-charge, casual and convenient learning platform; (2) enriching learners' knowledge beyond language learning; and (3) encouraging interactive and collaborative learning with other users for practicing the language. However, the low credibility and unstructured nature of the educational materials posted on Facebook, together with the ease of being distracted by other Facebook feeds, are the major weaknesses of learning a language through Facebook. Furthermore, the authors' results show that Facebook is especially effective for Japanese learning when learners fall into one of the following groups: young, female or intermediate (N2/N3) learners.
Originality/value
Few studies focus on learning Japanese with the aid of Facebook, especially Hong Kong learners' perceptions, or perceptions in the East more generally. This study aims to fill that research gap. The authors' findings will help students, teachers and language institutions in Hong Kong and other countries improve the effectiveness of learning and teaching Japanese.
Abstract
Purpose
This study aims to collect distributed knowledge organization systems (KOSs) from various domains, enrich each with meta information and link them to the multilingual KOS registry, facilitating integrated search alongside KOSs from various languages and regions.
Design/methodology/approach
This research involved collecting and organizing KOS information in three primary steps. The initial phase involved finding KOSs from Web search results, supplemented by the Korea ON-line E-Procurement System (KONEPS) and the National R&D Integrated Notification Service. After these KOSs were obtained, they were enriched by structuring contextual meta information using Basic Register of Thesauri, Ontologies and Classifications (BARTOC) metadata elements and by establishing a dedicated MediaWiki page for each. Finally, the KOSs were linked to the multilingual KOS registry, BARTOC, ensuring seamless integration with KOSs from various languages and regions and creating connections between each registry entry and its associated KOS wiki page.
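The three-step pipeline described above (collect, enrich, link) can be sketched as a simple record structure. The field names, example titles and URLs below are illustrative assumptions, not the official BARTOC metadata schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KosRecord:
    # Illustrative metadata elements loosely modeled on registry-style
    # descriptions; the actual BARTOC schema may differ.
    title: str
    language: str
    domain: str
    source: str                          # e.g. "KONEPS" or "Web search"
    wiki_page: Optional[str] = None      # dedicated MediaWiki page, if created
    registry_url: Optional[str] = None   # link into the multilingual registry

def enrich(record: KosRecord, wiki_page: str) -> KosRecord:
    """Step 2: attach a dedicated wiki page to a collected KOS record."""
    record.wiki_page = wiki_page
    return record

def link_to_registry(record: KosRecord, registry_url: str) -> KosRecord:
    """Step 3: connect the record to its multilingual registry entry."""
    record.registry_url = registry_url
    return record

# Step 1 (collection) yields a bare record; steps 2 and 3 enrich and link it.
kos = KosRecord("Example Korean Thesaurus", "ko", "science", "KONEPS")
enrich(kos, "https://wiki.example.org/Example_Korean_Thesaurus")
link_to_registry(kos, "https://bartoc.org/en/node/99999")
```

The record deliberately keeps the registry link and the wiki page as separate fields, mirroring the paper's distinction between the BARTOC entry and its associated KOS wiki page.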
Findings
The research findings revealed several insights, as follows: (1) importance of a stable source for collecting KOSs: no national body currently oversees KOS registration, underscoring the need for a systematic approach to collect dispersed KOSs. For Korean KOSs (K-KOSs), KONEPS and the National R&D Integrated Notification Service are effective data sources. (2) Importance of enhanced metadata: merely collecting KOSs was not enough. Enhanced metadata bridges access gaps, and dedicated wiki pages aid user identification and understanding. (3) Observations from multilingual registry uploads: when adding KOSs to a multilingual registry, similarities were observed across languages and regions. Recognizing this, the K-KOSs were linked with their international counterparts, fostering potential global collaboration.
Research limitations/implications
Due to the absence of a dedicated KOS registry agency, the study might have missed KOSs from certain fields or potentially over-collected from others. Furthermore, this study primarily focused on K-KOSs and their integration into the BARTOC registry, which might influence the methods and perspectives on collecting and establishing links among analogous KOSs in the registry.
Originality/value
This research pursued a stable method to detect KOS development and revisions across various fields. To facilitate this, we used the integrated e-procurement and R&D notification systems and added meta information, including MediaWiki pages, to aid in the identification and understanding of KOSs. Furthermore, link information was provided between the BARTOC registry and the Korean KOS websites and MediaWiki pages.
Paul Di Gangi, Robin Teigland and Zeynep Yetis
Abstract
Purpose
This research investigates how the value creation interests and activities of different stakeholder groups within one open source software (OSS) project influence the project's development over time.
Design/methodology/approach
The authors conducted a case study of OpenSimulator, using textual and thematic analyses of the first four years of the OpenSimulator developer mailing list to identify each stakeholder group and guide the analysis of their interests and value creation activities over time.
Findings
The analysis revealed that while each stakeholder group was active within the OSS project's development, the different groups possessed complementary interests that enabled the project to evolve. In the formative period, entrepreneurs were interested in the software's strategic direction in the market, academics and SMEs in software functionality and large firms and hobbyists in software testing. Each group retained its primary interest in the maturing period with academics and SMEs separating into server- and client-side usability. The analysis shed light on how the different stakeholder groups overcame tensions amongst themselves and took specific actions to sustain the project.
Originality/value
The authors extend stakeholder theory by reconceptualizing the focal organization and its stakeholders for OSS projects. To date, OSS research has primarily focused on examining one project relative to its marketplace. Using stakeholder theory, we identified stakeholder groups within a single OSS project to demonstrate their distinct interests and how these interests influence their value creation activities over time. Collectively, these interests enable the project's long-term development.
Kerstin Sahlin and Ulla Eriksson-Zetterquist
Abstract
Over the past few decades, university reforms in line with management and enterprise ideals have been well documented. Changes in the ideals underlying the missions of universities have led to changes in their modes of governing and organizing, which in turn drive further transformation of their missions. One set of reforms in Swedish higher education has been the dissolution of collegial bodies and procedures. At the same time, in recent years, we have witnessed an increased interest in collegiality and a reintroduction of collegial bodies and procedures. New translations of collegiality appear not only in how universities are organized, but also in other core aspects of research and higher education. We review examples of peer reviewing, research assessment, and direct recruitment of professors and ask: Can these new translations of collegiality be understood as a revitalization of collegiality, or is it – to draw a parallel with greenwashing – rather a matter of collegiality-washing?
Vaclav Snasel, Tran Khanh Dang, Josef Kueng and Lingping Kong
Abstract
Purpose
This paper aims to review in-memory computing (IMC) for machine learning (ML) applications from history, architectures and options aspects. In this review, the authors investigate different architectural aspects and collect and provide our comparative evaluations.
Design/methodology/approach
The authors collected over 40 IMC papers from recent years related to hardware design and optimization techniques, then classified them into three optimization categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize those techniques in terms of the data sets they were applied to, how each is designed and what the design contributes.
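The three-way classification described above can be illustrated with a minimal keyword-based sketch. The category keywords and example titles are invented for illustration and are not taken from the authors' actual coding scheme:

```python
# Hypothetical keyword lists for the three optimization categories
# named in the review; a real survey would code papers manually.
CATEGORIES = {
    "gpu": ["gpu", "graphics processing unit", "cuda"],
    "reduced_precision": ["quantization", "reduced precision", "int8"],
    "hardware_accelerator": ["fpga", "asic", "accelerator"],
}

def classify(title_or_abstract: str) -> str:
    """Assign a paper to the first category whose keywords it mentions."""
    text = title_or_abstract.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "unclassified"

label = classify("An FPGA-based accelerator for CNN inference")
```

A first-match rule like this is crude, but it captures the idea of sorting a corpus into the three optimization paths before summarizing each group.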
Findings
ML algorithms are potent tools accommodated on IMC architecture. Although general-purpose hardware (central processing units and GPUs) can supply explicit solutions, their energy efficiency is limited by their support for excessive flexibility. On the other hand, hardware accelerators (field-programmable gate arrays and application-specific integrated circuits) win on energy efficiency, but an individual accelerator often adapts exclusively to a single ML approach (family). From a long-term hardware evolution perspective, hardware/software collaborative design on heterogeneous hybrid platforms is an option for researchers.
Originality/value
IMC’s optimization enables high-speed processing, increases performance and analyzes massive volumes of data in real-time. This work reviews IMC and its evolution. Then, the authors categorize three optimization paths for the IMC architecture to improve performance metrics.
Campbell J. Thomson, Tania Tambiah and Mark B. M. Hochman
Abstract
The creation of a Unified National System of Higher Education in Australia (https://en.wikipedia.org/wiki/Dawkins_Revolution) in the late 1980s resulted in many new universities and significantly increased research funding for the sector. The result was the emergence of the modern Research Management Office (RMO) and eventually the establishment of the Australian Research Management Society (ARMS) to support the development of research management professionals in the region, including Singapore, New Zealand, the Pacific Islands and Papua New Guinea. In 2013, ARMS launched an accreditation program to recognise and develop careers in research management. There are now more than 3,500 ARMS members, nearly 30% of whom have been in the profession for less than five years. The role of ARMS in helping Research Managers and Administrators (RMAs) redefine their roles and upskill is ever more important in growing the profession and its leaders.
Abstract
This chapter offers an overview of the applications of artificial intelligence (AI) in the textile industry and, in particular, the textile colouration and finishing industry. The advent of new technologies such as AI and the Internet of Things (IoT) has changed many businesses, and one area in which AI is seeing growth is the textile industry. It is estimated that the AI software market will reach a new high of over US$60 billion by 2022, with the largest increase projected to be in the area of machine learning (ML). This is the area of AI in which machines process and analyse the vast amounts of data they collect to perform tasks and processes. In the textile manufacturing industry, AI is applied to areas such as colour matching, colour recipe formulation, pattern recognition, garment manufacture, process optimisation, quality control and supply chain management for enhanced productivity, product quality and competitiveness, reduced environmental impact and overall improved customer experience. The importance and success of AI are set to grow as ML algorithms become more sophisticated and smarter and computing power increases.
Abstract
Purpose
The paper addresses the issue of change in Wikidata ontology by exposing the role of the socio-epistemic processes that take place inside the infrastructure. The subject of the study was the process of extending the Wikidata ontology with a new property as an example of the interplay between the social and technical components of the Wikidata infrastructure.
Design/methodology/approach
In this study, an interpretative approach to the evolution of the Wikidata ontology was used. The interpretation framework was a process-centric approach to changes in the Wikidata ontology. The extension of the Wikidata ontology with a new property was considered a socio-epistemic process where multiple agents interact for epistemic purposes. The decomposition of this process into three stages (initiation, knowledge work and closure) allowed us to reveal the role of the institutional structure of Wikidata in the evolution of its ontology.
Findings
This study has shown that the modification of the Wikidata ontology is an institutionalized process where community-accepted regulations and practices must be applied. These regulations come from the institutional structure of the Wikidata community, which sets the normative patterns for both the process and social roles and responsibilities of the involved agents.
Originality/value
The results of this study enhance our understanding of the evolution of the collaboratively developed Wikidata ontology by exposing the role of socio-epistemic processes, division of labor and normative patterns.
Peter Dornheim and Ruediger Zarnekow
Abstract
Purpose
The human factor is the most important defense asset against cyberattacks. To ensure that the human factor stays strong, a cybersecurity culture must be established and cultivated in a company to guide the attitudes and behaviors of employees. Many cybersecurity culture frameworks exist; however, their practical application is difficult. This paper aims to demonstrate how an established framework can be applied to determine and improve the cybersecurity culture of a company.
Design/methodology/approach
Two surveys were conducted within eight months in the internal IT department of a global software company to analyze the cybersecurity culture and the applied improvement measures. Both surveys comprised the same 23 questions to measure cybersecurity culture according to six dimensions: cybersecurity accountability, cybersecurity commitment, cybersecurity necessity and importance, cybersecurity policy effectiveness, information usage perception and management buy-in.
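Scoring a survey of this kind amounts to averaging the per-question responses within each dimension. The mapping of the 23 questions onto the six dimensions below is a hypothetical illustration, since the actual questionnaire items are not given in the abstract:

```python
# Hypothetical assignment of the 23 questions to the six cybersecurity
# culture dimensions; the real instrument's item mapping may differ.
DIMENSIONS = {
    "accountability": [1, 2, 3, 4],
    "commitment": [5, 6, 7, 8],
    "necessity_and_importance": [9, 10, 11, 12],
    "policy_effectiveness": [13, 14, 15, 16],
    "information_usage_perception": [17, 18, 19],
    "management_buy_in": [20, 21, 22, 23],
}

def dimension_scores(responses):
    """Average a respondent's answers (question number -> rating)
    within each dimension to get one score per dimension."""
    return {
        dim: sum(responses[q] for q in qs) / len(qs)
        for dim, qs in DIMENSIONS.items()
    }

# One respondent answering every question with 4 on a 5-point scale:
scores = dimension_scores({q: 4.0 for q in range(1, 24)})
```

Comparing such per-dimension averages between the two survey rounds is what lets the authors say that accountability, commitment and policy effectiveness improved.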
Findings
Results demonstrate that cybersecurity culture maturity can be determined and improved if accurate measures are derived from the results of the survey. The first survey showed potential for improving the dimensions of cybersecurity accountability, cybersecurity commitment and cybersecurity policy effectiveness, while the second survey proved that these dimensions have been improved.
Originality/value
This paper proves that practical application of cybersecurity culture frameworks is possible if they are appropriately tailored to a given organization. In this regard, scientific research and practical application combine to offer real value to researchers and cybersecurity executives.
Min Zuo, Jiangnan Qiu and Jingxian Wang
Abstract
Purpose
Online collaboration in today's world is a topic of genuine interest to Internet researchers. The purpose of this paper is to explore the role of group knowledge heterogeneity (GKH) in open collaboration performance using the mediating mechanisms of group cognition (GC) and interaction to understand the determinants of the success of online open collaboration platforms.
Design/methodology/approach
Study findings are based on partial least squares structural equation modeling (PLS-SEM), the formal mediation test and moderating effect analysis from Wikipedia's 160 online open collaborative groups.
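The formal mediation test mentioned above can be illustrated with a generic bootstrap test of an indirect effect in a simple X → M → Y model. This is an ordinary-least-squares sketch on synthetic data, not PLS-SEM and not the authors' actual procedure:

```python
import random

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def indirect_effect(x, m, y):
    """Indirect effect a*b: path a regresses M on X; path b is the
    partial slope of Y on M, controlling for X (OLS formulas)."""
    a = cov(x, m) / cov(x, x)
    denom = cov(m, m) * cov(x, x) - cov(x, m) ** 2
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / denom
    return a * b

def bootstrap_ci(x, m, y, n_boot=500, seed=0):
    """95% percentile bootstrap confidence interval for a*b."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        effects.append(indirect_effect([x[i] for i in idx],
                                       [m[i] for i in idx],
                                       [y[i] for i in idx]))
    effects.sort()
    return effects[int(0.025 * n_boot)], effects[int(0.975 * n_boot)]

# Synthetic data with a built-in indirect path X -> M -> Y:
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(300)]
m = [0.6 * xi + rng.gauss(0, 0.5) for xi in x]
y = [0.7 * mi + rng.gauss(0, 0.5) for mi in m]
lo, hi = bootstrap_ci(x, m, y)
# A confidence interval excluding zero signals a significant indirect effect.
```

The same logic, with latent constructs estimated by PLS rather than observed variables, underlies the bootstrapped mediation tests reported in PLS-SEM studies such as this one.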
Findings
For online knowledge heterogeneous groups, open collaboration performance is mediated by both GC and collaborative interaction (COL). The mediating role of GC is weak, while the mediating role of COL is strengthened when knowledge complexity (KC) is higher. By dividing group interaction into COL and communicative interaction (COM), the authors also observed that COL is effective for online open collaboration, whereas COM is limited.
Originality/value
These findings suggest that for more heterogeneous large groups, group interaction would explain more variance in performance than GC, offering an in-depth understanding of the relationship between group heterogeneity and open collaboration performance, answering what determines the success of online open collaboration platforms as well as explaining the inconsistency in prior findings. In addition, this study expands the application of Interactive Team Cognition (ITC) theory to the online open collaboration context.