Search results
1 – 10 of over 8,000
The purpose of this paper is to investigate the extent to which the Thai Universal Healthcare Insurance Coverage Scheme (UC) has contributed to villagers’ well-being in the…
Abstract
Purpose
The purpose of this paper is to investigate the extent to which the Thai Universal Healthcare Insurance Coverage Scheme (UC) has contributed to villagers’ well-being in the northeast of Thailand. Public opinion polls that strongly endorse the scheme are used to justify its ongoing political support. However, the question remains whether it has made a difference in the lives of poorer rural people.
Design/methodology/approach
A multi-methods approach and a well-being-focused evaluation (WFE) approach are used to understand villagers’ experiences of having and using the scheme, to investigate their satisfaction with it, and to examine how this satisfaction has contributed to their life as a whole.
Findings
It is found that the scheme has made a valuable contribution to improving perceived well-being amongst villagers. Apart from the direct benefit of having healthcare when needed, there is also the indirect benefit of an increased sense of security that healthcare will be accessible if required.
Research limitations/implications
There are still pertinent issues for policy consideration; for example, almost 31 per cent of the villagers with the card have never used it, and approximately 22 per cent of those using the card reported dissatisfaction. Although direct healthcare costs are now more affordable, a range of opportunity costs and geographic, social, cultural and other factors still need to be factored into further policy and service development to make the scheme more equitable and effective.
Originality/value
The study proposes WFE, a new evaluation approach. WFE may also be applied to other forms of social policy, particularly when assessing the impact of policy on people's well-being.
Details
Keywords
Yu‐Wei Chan, Chih‐Han Lai and Yeh‐Ching Chung
Peer-to-peer (P2P) streaming has quickly emerged as an important application over the internet. Many systems have been implemented to support peer-to-peer media streaming…
Abstract
Purpose
Peer-to-peer (P2P) streaming has quickly emerged as an important application over the internet. Many systems have been implemented to support P2P media streaming. However, some problems remain, including non-guaranteed communication efficiency, limited upload capacity and the dynamics of suppliers, all of which relate to overlay topology design. The purpose of this paper is to propose a novel overlay construction framework for P2P streaming.
Design/methodology/approach
To exploit the bandwidth of neighboring peers with low communication delay, a grouping method is proposed to construct a flexible two-layered locality-aware overlay network. In the proposed overlay, peers are clustered into locality groups according to their communication delays. These locality groups are interconnected with each other to form the top layer of the overlay. Within each locality group, peers form an overlay mesh for transmitting the stream to other peers of the same group; these meshes form the bottom layer of the overlay.
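The two-layered construction described above can be sketched as follows; the greedy clustering rule, the 50 ms threshold, the full-mesh wiring and the choice of group representatives are illustrative assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch of the two-layered locality-aware overlay: peers are
# clustered into locality groups by pairwise delay, group representatives
# are interconnected at the top layer, and each group forms a mesh at the
# bottom layer.

def build_overlay(peers, delay, threshold=50):
    """Greedily cluster peers whose mutual delay is below `threshold` (ms)."""
    groups = []
    for p in peers:
        for g in groups:
            # Join the first group whose members are all "close" to p.
            if all(delay[p][q] < threshold for q in g):
                g.append(p)
                break
        else:
            groups.append([p])  # p starts a new locality group

    # Bottom layer: a full mesh inside each locality group.
    bottom = [{(x, y) for x in g for y in g if x < y} for g in groups]
    # Top layer: connect group representatives (first member of each group).
    reps = [g[0] for g in groups]
    top = {(reps[i], reps[j]) for i in range(len(reps))
           for j in range(i + 1, len(reps))}
    return groups, top, bottom
```

A real system would refresh groups as peers join and leave; the sketch only shows the static construction.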
Findings
Through simulations, the performance of the proposed solution was compared in terms of communication efficiency, source-to-end delivery efficiency and reliability of the delivery paths. Simulation results show that the proposed method can construct a scalable, efficient and stable peer-to-peer streaming environment.
Originality/value
The new contributions of this paper are a novel framework that includes adaptability, maintenance and optimization schemes to adjust the size of the overlay dynamically according to the dynamics of peers, and the consideration of the locality of peers in the system.
Details
Keywords
The purpose of this study is to present a newly proposed and developed sorting algorithm-based merging weighted fraction Monte Carlo (SAMWFMC) method for solving the population…
Abstract
Purpose
The purpose of this study is to present a newly proposed and developed sorting algorithm-based merging weighted fraction Monte Carlo (SAMWFMC) method for solving the population balance equation for the weighted fraction coagulation process in aerosol dynamics with high computational accuracy and efficiency.
Design/methodology/approach
In the new SAMWFMC method, the jump Markov process is constructed as the weighted fraction Monte Carlo (WFMC) method (Jiang and Chan, 2021) with a fraction function. Both adjustable and constant fraction functions are used to validate the computational accuracy and efficiency. A new merging scheme is also proposed to ensure a constant-number and constant-volume scheme.
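As an illustration of a constant-number, constant-volume merging step, two weighted simulation particles can be merged so that total number concentration (weight) and total particle volume are conserved, with the heaviest remaining particle split to restore the count. This is a simplified stand-in for the paper's sorting-based scheme; the pairing and splitting rules below are assumptions.

```python
# Illustrative sketch (not the authors' exact algorithm): merge the two
# lightest weighted particles into one, conserving weight and volume, then
# split the heaviest particle so the particle count stays constant.

def merge_and_split(particles):
    """particles: list of (weight, volume) tuples."""
    particles = sorted(particles)                  # sorting step
    (w1, v1), (w2, v2) = particles[0], particles[1]
    w = w1 + w2                                    # conserve number concentration
    v = (w1 * v1 + w2 * v2) / w                    # conserve total volume
    rest = particles[2:]
    # Split the heaviest particle in two to keep the count constant.
    wh, vh = rest[-1]
    rest[-1] = (wh / 2, vh)
    rest.append((wh / 2, vh))
    rest.append((w, v))
    return rest
```

Both the total weight and the weighted total volume (sum of w·v) are unchanged by the step, which is the constant-number, constant-volume property the text refers to.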
Findings
The new SAMWFMC method is fully validated by comparing with existing analytical solutions for six benchmark test cases. The numerical results obtained from the SAMWFMC method with both adjustable and constant fraction functions show excellent agreement with the analytical solutions and low stochastic errors. Compared with the WFMC method (Jiang and Chan, 2021), the SAMWFMC method can significantly reduce the stochastic error in the total particle number concentration without increasing the stochastic errors in high-order moments of the particle size distribution at only slightly higher computational cost.
Originality/value
The WFMC method (Jiang and Chan, 2021) has a stringent restriction on the fraction functions, making few fraction functions applicable to the WFMC method except for several specifically selected adjustable fraction functions, while the stochastic error in the total particle number concentration is considerably large. The newly developed SAMWFMC method shows significant improvement and advantage in dealing with weighted fraction coagulation process in aerosol dynamics and provides an excellent potential to deal with various fraction functions with higher computational accuracy and efficiency.
Details
Keywords
Jenq-Muh Hsu, Jui-Yang Chang and Chih-Hung Wang
Named Data Networking (NDN) is a content-centric network differing from the traditional IP-based network. It adopts the name prefix to identify, query and route the information…
Abstract
Purpose
Named Data Networking (NDN) is a content-centric network that differs from the traditional IP-based network. It uses name prefixes to identify, query and route information content instead of IP-based addressing and routing. NDN provides a convenient way to access content without knowing the original location of the requested information. However, the length of a name prefix varies; unlike fixed-length IP addresses, this variability makes handling queries and searching for requested information in NDN harder. An efficient name lookup mechanism for name prefixes will increase the performance of prefix identification, name searching and content retrieval. Therefore, this paper aims to propose a partial name prefix merging and shortening scheme for enhancing the efficiency of name lookup in NDN.
Design/methodology/approach
To reduce the work involved in identifying, querying, storing and routing name prefixes, this work adopts a cyclic redundancy check (CRC)-based encoding scheme to shorten the variable-length name prefix into a proper, fixed length of encoded numerical information. The structure of a name prefix is a sequence of word segments separated by slash symbols. The shortening procedure can also be applied to adjacent word segments, forming fixed-length encoded data for more efficient name prefix matching during name lookup in NDN.
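A minimal sketch of the shortening idea, assuming zlib's CRC-32 as the encoding function and pairwise merging of adjacent segments (the paper's exact CRC variant and merging policy may differ):

```python
# Split a name prefix on '/', merge adjacent segments in groups, and encode
# each merged part to a fixed 32-bit CRC value, so lookup tables can index
# fixed-length integers instead of variable-length strings.

import zlib

def encode_prefix(name, group=2):
    """Encode '/seg1/seg2/...' into a tuple of fixed-length CRC-32 codes,
    merging `group` adjacent segments per code."""
    segments = [s for s in name.split("/") if s]
    codes = []
    for i in range(0, len(segments), group):
        part = "/".join(segments[i:i + group])
        codes.append(zlib.crc32(part.encode()))  # fixed 32-bit code
    return tuple(codes)
```

Because each part is encoded separately, two prefixes that differ in any part yield different code tuples, which is how partial encoding preserves the distinguishing information the Findings section mentions.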
Findings
The experimental results show that the shorter encoded name prefix can effectively reduce the access time of name lookup and speed up retrieval of the corresponding named content in NDN. With partial merging and shortening, the encoded prefix may be longer than a single encoding of the whole name prefix, but it retains the distinguishing information from different parts of various name prefixes. It can thus avoid collision problems in which different name prefixes map to the same encoded information.
Originality/value
From the experimental results, it is observed that partial merging and shortening of name prefixes is useful for name lookup in NDN: it increases the efficiency of name prefix matching and retrieval and also saves the memory space needed to store name prefixes in an NDN node.
Details
Keywords
Vasileios Stamatis, Michail Salampasis and Konstantinos Diamantaras
In federated search, a query is sent simultaneously to multiple resources and each one of them returns a list of results. These lists are merged into a single list using the…
Abstract
Purpose
In federated search, a query is sent simultaneously to multiple resources and each one of them returns a list of results. These lists are merged into a single list using the results merging process. In this work, the authors apply machine learning methods for results merging in federated patent search. Even though several methods for results merging have been developed, none of them has been tested on patent data or considered several machine learning models. Thus, the authors experiment with state-of-the-art methods using patent data and propose two new results merging methods that use machine learning models.
Design/methodology/approach
The methods are based on a centralized index containing samples of documents from all the remote resources, and they implement machine learning models to estimate comparable scores for the documents retrieved by different resources. The authors examine the new methods in cooperative and uncooperative settings, where document scores from the remote search engines are and are not available, respectively. For uncooperative environments, they propose two methods for assigning document scores.
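The score-estimation idea can be sketched with a deliberately simple stand-in model: documents that appear both in a remote result list and in the centralized sample index supply (local score, centralized score) training pairs, and a fitted mapping converts every local score onto the comparable centralized scale. The authors use machine learning models such as random forests; a one-dimensional least-squares fit is used here only to keep the sketch self-contained.

```python
# Per-resource score normalization for results merging: learn a mapping from
# each resource's local scores to the centralized-index scale, then merge all
# lists into one ranking of comparable scores.

def fit_line(pairs):
    """Ordinary least squares y = a*x + b from (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def merge_results(result_lists, overlap_pairs):
    """result_lists: {resource: [(doc, local_score), ...]}
    overlap_pairs: {resource: [(local_score, central_score), ...]}"""
    merged = []
    for res, docs in result_lists.items():
        a, b = fit_line(overlap_pairs[res])      # per-resource mapping
        merged += [(doc, a * s + b) for doc, s in docs]
    return sorted(merged, key=lambda t: -t[1])   # single comparable ranking
```

Swapping `fit_line` for a regression model trained on richer features is the step that turns this sketch into the ML-based methods the abstract describes.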
Findings
The effectiveness of the new results merging methods was measured against state-of-the-art models and found to be superior to them in many cases with significant improvements. The random forest model achieves the best results in comparison to all other models and presents new insights for the results merging problem.
Originality/value
In this article, the authors show that machine learning models can substitute for the standard methods and models that have been used for results merging for many years. The proposed methods outperformed state-of-the-art estimation methods for results merging and proved more effective for federated patent search.
Details
Keywords
Managers must make numerous strategic decisions in order to initiate and implement a business model innovation (BMI). This paper examines how managers perceive the management team…
Abstract
Purpose
Managers must make numerous strategic decisions in order to initiate and implement a business model innovation (BMI). This paper examines how managers perceive the way the management team interacts when making BMI decisions, and investigates how group biases and board members’ risk willingness affect this process.
Design/methodology/approach
Empirical data were collected through 26 in-depth interviews with German managing directors from 13 companies in four industries (mobility, manufacturing, healthcare and energy) to explore three research questions: (1) What group effects are prevalent in BMI group decision-making? (2) What are the key characteristics of BMI group decisions? And (3) what are the potential relationships between BMI group decision-making and managers' risk willingness? A thematic analysis based on Gioia's guidelines was conducted to identify themes in the comprehensive dataset.
Findings
First, the results show four typical group biases in BMI group decisions: groupthink, social influence, hidden profile and group polarization. The findings show that the hidden profile paradigm and groupthink theory are essential in the context of BMI decisions. Second, we developed a BMI decision matrix covering the key characteristics of BMI group decision-making: managerial cohesion, conflict readiness, and information- and emotion-based decision behavior. Third, in contrast to previous literature, we found that individual risk aversion can improve the quality of BMI decisions.
Practical implications
This paper provides managers with an opportunity to become aware of group biases that may impede their strategic BMI decisions. Specifically, it points out that managers should consider the key cognitive constraints due to their interactions when making BMI decisions. This work also highlights the importance of risk-averse decision-makers on boards.
Originality/value
This qualitative study contributes to the literature on decision-making by revealing key cognitive group biases in strategic decision-making. This study also enriches the behavioral science research stream of the BMI literature by attributing a critical influence on the quality of BMI decisions to managers' group interactions. In addition, this article provides new perspectives on managers' risk aversion in strategic decision-making.
Details
Keywords
Chantola Kit, Toshiyuki Amagasa and Hiroyuki Kitagawa
The purpose of this paper is to propose efficient algorithms for structural grouping over Extensible Markup Language (XML) data, called TOPOLOGICAL ROLLUP (T‐ROLLUP), which are to…
Abstract
Purpose
The purpose of this paper is to propose efficient algorithms for structural grouping over Extensible Markup Language (XML) data, called TOPOLOGICAL ROLLUP (T-ROLLUP), which compute aggregation functions over XML data with multiple hierarchical levels. These algorithms play an important role in the online analytical processing of XML data, called XML-OLAP, with which complex analyses over XML can be performed to discover valuable information.
Design/methodology/approach
Several variations of algorithms are proposed for efficient T-ROLLUP computation. First, two basic algorithms, the top-down algorithm (TDA) and the bottom-up algorithm (BUA), are presented, in which the well-known structural-join algorithms are used. The paper then proposes more efficient algorithms, called single-scan by preorder number (SSC-Pre) and single-scan by postorder number (SSC-Post), which are also based on structural joins but modify the basic algorithms so that multiple levels of grouping are computed with a single scan over the node lists. In addition, the paper adapts the algorithms for parallel execution in multi-core environments.
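The preorder/postorder machinery behind these algorithms can be sketched as follows (the tree shape and node names are illustrative): with both numberings, node a is an ancestor of node d iff pre(a) < pre(d) and post(a) > post(d), and aggregates for every grouping level can be collected in one traversal.

```python
# Preorder/postorder numbering of (name, value, children) trees, the classic
# structural containment test, and a one-pass rollup of sum aggregates for
# every grouping level.

def assign_numbers(root):
    pre, post, counter = {}, {}, [0, 0]
    def visit(node):
        name, _, children = node
        pre[name] = counter[0]; counter[0] += 1
        for child in children:
            visit(child)
        post[name] = counter[1]; counter[1] += 1
    visit(root)
    return pre, post

def is_ancestor(a, d, pre, post):
    # a encloses d in document order.
    return pre[a] < pre[d] and post[a] > post[d]

def rollup(node, out):
    """Aggregate (sum) values for every level in a single pass."""
    name, value, children = node
    total = value + sum(rollup(child, out) for child in children)
    out[name] = total
    return total
```

The actual SSC-Pre/SSC-Post algorithms operate over node lists produced by structural joins rather than an in-memory tree, but the numbering and containment test are the same.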
Findings
Several experiments were conducted with XMark and synthetic XML data to show the effectiveness of the proposed algorithms. The experiments show that the proposed algorithms perform much better than a naïve implementation; in particular, SSC-Pre and SSC-Post outperform TDA and BUA in all cases. Beyond that, the parallel single-scan algorithm also performs better than the ordinary basic algorithm.
Research limitations/implications
This paper focuses on the T‐ROLLUP operation for XML data analysis. For this reason, other operations related to XML‐OLAP, such as CUBE, WINDOWING, and RANKING should also be investigated.
Originality/value
The paper presents an extended version of one of the award-winning papers at iiWAS2008.
Details
Keywords
Koen Mondelaers and Guido Van Huylenbroeck
The purpose of this paper is to exemplify, by means of a Belgian case study, the transition of multiple certification schemes currently employed in the food sector towards a…
Abstract
Purpose
The purpose of this paper is to exemplify, by means of a Belgian case study, the transition of multiple certification schemes currently employed in the food sector towards a single retail driven higher end spot market.
Design/methodology/approach
Data were obtained by means of focus group sessions, a survey, in-depth interviews and a literature review. The theoretical framework builds upon institutional economics, Porter's competitive forces and the theory of system innovations. The article illustrates the current institutional setting of certification, the drive towards a premium spot market and the consequences for participants in the schemes.
Findings
This paper illustrates that a shift towards a premium spot market is indeed apparent. The paper furthermore argues that the dynamics of certification schemes are characterized by processes of contraction (mergers) followed by relaxation (diversification). The paper concludes that the retail sector is the primary beneficiary of the shift towards a single premium spot market. For the remainder of the food chain members, it is less clear whether the overall effect is positive.
Originality/value
The question of multiple certification schemes merging into a single retail-driven scheme is approached from different stakeholders' points of view. Furthermore, the different factors steering this transition are elucidated and empirically confirmed. Both elements make this paper a valuable contribution to the existing literature on certification and coordination mechanisms in the food chain.
Details
Keywords
Roy Rada, Hafedh Mili, Gary Letourneau and Doug Johnston
An indexing language is made more accessible to searchers and indexers by the presence of entry terms or near‐synonyms. This paper first presents an evaluation of existing entry…
Abstract
An indexing language is made more accessible to searchers and indexers by the presence of entry terms or near-synonyms. This paper first presents an evaluation of existing entry terms and then presents and tests a strategy for creating entry terms. The key tools in the evaluation of the entry terms are documents already indexed into the Medical Subject Headings (MeSH) and an automatic indexer. If the automatic indexer can map the title to the index terms better with the use of entry terms than without them, then the entry terms have helped. Sensitive assessment of the automatic indexer requires the introduction of measures of conceptual closeness between the computer and human output. With the tools described in this paper, one can systematically demonstrate that certain entry terms have ambiguous meanings. In the selection of new entry terms, another controlled vocabulary or thesaurus, called the Systematized Nomenclature of Medicine (SNOMED), was consulted. An algorithm for mapping terms from SNOMED to MeSH was implemented and evaluated with the automatic indexer. The new SNOMED-based entry terms did not help indexing but did show how new concepts might be identified which would constitute meaningful amendments to MeSH. Finally, an improved algorithm for combining two thesauri was applied to the Computing Reviews Classification Structure (CRCS) and MeSH. CRCS plus MeSH supported better indexing than did MeSH alone.
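The evaluation idea can be sketched with a toy indexer (the vocabulary below is illustrative, not actual MeSH): the indexer maps a title to headings, optionally routing through entry terms, and the entry terms "help" when they let the indexer find headings it would otherwise miss.

```python
# Toy automatic indexer: match headings in a title directly, and optionally
# also via entry terms (near-synonyms that point at a heading).

HEADINGS = {"myocardial infarction", "hypertension"}
ENTRY_TERMS = {"heart attack": "myocardial infarction",
               "high blood pressure": "hypertension"}

def index_title(title, use_entry_terms=True):
    text = title.lower()
    found = {h for h in HEADINGS if h in text}           # direct matches
    if use_entry_terms:
        found |= {h for e, h in ENTRY_TERMS.items() if e in text}
    return found
```

Comparing the output with and without entry terms against human indexing, over a corpus of already-indexed titles, is the evaluation the paper automates.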
H. Kabir, Gholamali C. Shoja and Eric G. Manning
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large…
Abstract
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a streaming service well. An entire audio/video media file often cannot be cached due to intellectual property right concerns of the content owners, security reasons, and also due to its large size. This makes a streaming service hard to scale using conventional proxy servers. Media file compression using variable-bit-rate (VBR) encoding is necessary to get constant-quality video playback, although it produces traffic bursts. Traffic bursts either waste network bandwidth or cause hiccups in the playback. Large network latency and jitter also cause long start-up delay and unwanted pauses in the playback, respectively. In this paper, we propose a proxy-based constant-bit-rate (CBR)-transmission scheme for VBR-encoded videos and a scalable streaming scheme that uses the CBR-transmission scheme to stream stored videos over the Internet. Our CBR-streaming scheme allows a server to transmit a VBR-encoded video at a constant bit rate, close to its mean encoding bit rate, and deals with the network latency and jitter issues efficiently in order to provide quick and hiccup-free playback without caching an entire media file. Our scalable streaming scheme also allows many clients to share a server stream. We use prefix buffers at the proxy to cache the prefixes of popular videos, to minimize the start-up delay and to enable near mean bit rate streaming from the server as well as from the proxy. We use smoothing buffers at the proxy not only to eliminate jitter and traffic burst effects but also to enable many clients to share the same server stream. We present simulation results to demonstrate the effectiveness of our streaming scheme.
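The role of the prefix buffer can be sketched with a small calculation (the frame sizes and rate below are illustrative): transmitting at a constant rate, the client underflows whenever cumulative consumption outruns cumulative arrival, and the worst shortfall is exactly the prefix the proxy must supply before playback starts.

```python
# Given a VBR frame-size trace and a constant transmission rate, compute the
# smallest prefix (in bits) that must be buffered before playback so the
# CBR transmission never underflows.

def required_prefix(frame_bits, rate_bits_per_frame):
    sent = played = shortfall = 0
    for bits in frame_bits:
        played += bits                  # cumulative consumption
        sent += rate_bits_per_frame     # cumulative CBR arrival
        shortfall = max(shortfall, played - sent)
    return shortfall
```

Transmitting at the mean encoding rate keeps the required prefix small; the proxy's prefix buffer serves exactly this amount so playback can begin immediately.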
Details