Search results
Abstract
Purpose: Because past research has investigated nonverbal behaviors in clusters, it is unclear what status value is ascribed to individual nonverbal behaviors. I test status cues theory to investigate whether response latency functions as a status cue. I explore whether it affects behavioral influence or if it only signals assertiveness and does not have status value. I also explore how one's interpretation of response latency impacts behavioral influence.
Methodology: In a two-condition laboratory experiment, I isolate response latency and test its strength independently, and then I measure behavioral influence, participants' response latency, and perceptions of assertiveness. I also conduct interviews to investigate how participants interpret their partner's response latency to understand how people ascribe different meanings to the same nonverbal behavior.
Findings: I find that response latency alone does not affect behavioral influence, in part because how people interpret it varies. However, response latency does significantly impact participants' own response latency and their perceptions of their partner's assertiveness.
Practical Implications: This research demonstrates the intricacies of nonverbal behavior and status. More specifically, this work underscores important conceptual differences between assertiveness and status, and demonstrates how the interpretation of nonverbal behavior can impact behavioral influence.
Masoud Nosrati and Ronak Karimi
Abstract
Purpose
This paper aims to provide a method for media resource allocation in Cloud systems that supports green computing policies, while also attempting to improve the overall performance of the system by optimizing communication latencies.
Design/methodology/approach
A common method for resource allocation uses a resource agent that takes the budgets/prices of applicants/resources and creates a probability matrix of allocation according to the policies of the system. Two general policies for optimization are latency optimization and green computing. The presented heuristic for latency measures the average communication latencies between applicant and resource and feeds them into the next allocation decision. To achieve green computing, the allocated resources are consolidated onto a smaller number of physical machines: the formula for calculating the price of each resource is modified to decrease the probability of allocating resources on the machines with the fewest allocated resources.
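The agent's two policy biases can be sketched roughly as follows; the function names, the score formula and the weighting constants are assumptions of this sketch, not the paper's exact formulation:

```python
# Sketch of the resource agent's probability matrix with the two policy biases.
#  - latency policy: the observed average latency of an applicant/resource
#    pair raises that pair's effective price;
#  - green policy: machines hosting fewer allocations get a price penalty,
#    steering new allocations toward already-loaded machines (consolidation).

def allocation_probabilities(budgets, prices, avg_latency, machine_load,
                             latency_weight=1.0, green_weight=1.0):
    """Return P where P[a][r] is the probability of giving resource r to applicant a."""
    probs = []
    for a, budget in enumerate(budgets):
        scores = []
        for r, base_price in enumerate(prices):
            price = (base_price
                     + latency_weight * avg_latency[a][r]     # latency policy
                     + green_weight / (1 + machine_load[r]))  # green policy
            scores.append(budget / price)                     # cheaper => more likely
        total = sum(scores)
        probs.append([s / total for s in scores])
    return probs

P = allocation_probabilities(
    budgets=[10.0, 8.0],
    prices=[2.0, 2.0],
    avg_latency=[[0.1, 0.9], [0.9, 0.1]],  # applicant 0 is "closer" to resource 0
    machine_load=[5, 0],                   # resource 0's machine is already loaded
)
# Each row of P sums to 1; applicant 0 favors the low-latency, loaded machine.
```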
Findings
The results of the proposed method indicate its success in both green computing and performance improvement. Experiments show a 21.4 per cent decrease in response time even as the number of tasks increases over the tested range. The maximum and minimum energy savings are acceptable, reported as 79.2 and 16.8 per cent, respectively.
Research limitations/implications
Like other centralized solutions, the proposed method suffers from the limitations of a centralized resource agent, such as being a bottleneck. The implementation of a distributed resource agent is left to future work.
Originality/value
The proposed method presents heuristics for improving performance and achieving green computing. The key feature is formulating all the details and considering pitch variables for controlling the policies of the system.
M. Angulakshmi, M. Deepa, M. Vanitha, R. Mangayarkarasi and I. Nagarajan
Abstract
Purpose
In this study, we discuss three DTN routing protocols: epidemic, PRoPHET, and spray and wait. A special simulator, the opportunistic network environment (ONE), is used to create a network environment. Spray and wait has the highest delivery rate and low latency in most cases; hence, it performs better than the others. This analysis of the performance of DTN protocols helps researchers better understand these protocols in different environments.
Design/methodology/approach
A delay-tolerant network (DTN) is a network designed to operate effectively over extreme distances, such as those encountered in space communications or on an interplanetary scale. In such an environment, communication between nodes is intermittent, and the choice of the next node for a communication is not guaranteed. Packets are transferred by searching for the most efficient route currently available for a particular node. Due to the uncertainty of the packet transfer route, DTN performance is affected by a variety of factors such as packet size, communication cost and node activity.
Findings
Spray and wait has the highest delivery rate and low latency in most cases; hence, it performs better than the other protocols.
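The copy-limiting behavior behind spray and wait's low overhead can be sketched with the textbook binary variant; this sketch is illustrative and does not reproduce the ONE simulator experiments:

```python
# Binary spray-and-wait forwarding: a source starts with L copies of a message.
# On meeting another node, a carrier with more than one copy hands over half
# of them ("spray"); a carrier with exactly one copy holds it until it meets
# the destination itself ("wait"). This bounds the number of transmissions.

def on_encounter(copies, peer_is_destination):
    """Return (copies kept, copies given, delivered?) for one encounter."""
    if peer_is_destination:
        return copies, 0, True           # direct delivery to the destination
    if copies > 1:
        give = copies // 2               # binary spray: hand over half the copies
        return copies - give, give, False
    return copies, 0, False              # wait phase: keep the single copy

kept, given, done = on_encounter(copies=8, peer_is_destination=False)
# kept == 4, given == 4: the message spreads quickly at first, then each
# carrier waits, which bounds overhead while keeping latency low.
```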
Originality/value
The primary goal of the paper is to extend these works to offer a better understanding of the behavior of different DTN routing protocols in terms of delivery probability, latency and overhead ratio, which depend on various network parameters such as buffer size, number of nodes, movement ratio, time to live, movement range, transmission range and message generation rate.
Masoud Nosrati and Mahmood Fazlali
Abstract
Purpose
One of the techniques for improving the performance of distributed systems is data replication, wherein new replicas are created to provide greater accessibility, fault tolerance and lower data access cost. In this paper, the authors propose a community-based solution for managing data replication, based on a graph model of communication latency between computing and storage nodes. Communities are clusters of nodes among which communication latency is minimal. The purpose of this study is to use this method to minimize the latency and access cost of data.
Design/methodology/approach
This paper uses the Louvain algorithm to find the best communities. In the proposed algorithm, when a node requests a file located outside the applicant's community, the cost of accessing that file is calculated and accumulated. When the accumulated cost exceeds a specified threshold, a new replica of the file is created in the applicant's community. In addition, the number of replicas of each file is limited to prevent the system from creating useless and redundant data.
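The accumulated-cost replication rule can be sketched as follows, assuming the communities have already been found (e.g. by the Louvain algorithm); the class name, threshold and cost values are illustrative assumptions:

```python
from collections import defaultdict

# Each community accumulates the cost of its remote accesses per file; once
# that cost passes a threshold, a replica is created inside the community,
# subject to a cap on the total number of replicas per file.

class ReplicaManager:
    def __init__(self, node_community, threshold, max_replicas):
        self.node_community = node_community   # node -> community id
        self.threshold = threshold
        self.max_replicas = max_replicas
        self.replicas = defaultdict(set)       # file -> communities holding a copy
        self.cost = defaultdict(float)         # (file, community) -> accumulated cost

    def request(self, node, file, access_cost):
        com = self.node_community[node]
        if com in self.replicas[file]:
            return "local"                     # served inside the community
        self.cost[(file, com)] += access_cost  # remote access: accumulate its cost
        if (self.cost[(file, com)] > self.threshold
                and len(self.replicas[file]) < self.max_replicas):
            self.replicas[file].add(com)       # create a replica in this community
            self.cost[(file, com)] = 0.0
            return "replicated"
        return "remote"

mgr = ReplicaManager({"a": 0, "b": 1}, threshold=5.0, max_replicas=2)
mgr.replicas["f"].add(0)                       # original copy lives in community 0
results = [mgr.request("b", "f", 2.0) for _ in range(3)]
# ['remote', 'remote', 'replicated'] -> cost 2+2+2 exceeds 5 on the third access
```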
Findings
To evaluate the method, four metrics were introduced and measured, including communication latency, response time, data access cost and data redundancy. The results indicated acceptable improvement in all of them.
Originality/value
So far, this is the first research that aims at managing the replicas via community detection algorithms. It opens many opportunities for further studies in this area.
Abstract
Purpose
The use of mobile wireless data services continues to increase worldwide. New fourth‐generation (4G) wireless networks can deliver data rates exceeding 2 Mbps. The purpose of this paper is to develop a framework of 4G mobile applications that utilize such high data rates and run on small form‐factor devices.
Design/methodology/approach
The author reviews existing literature of mobile applications development and proposes using network‐related characteristics to create a conceptual framework of these applications.
Findings
Combining traffic symmetry and latency yields a 2×3 framework with six categories that characterize current and emerging 4G mobile applications, such as augmented reality, mobile social networking and m‐health.
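A minimal sketch of such a two-axis classification follows; the axis values and example placements are assumptions for illustration only, since the paper defines its own six categories:

```python
# A 2x3 grid: two traffic-symmetry values crossed with three latency-
# sensitivity levels gives six cells, matching the shape of the framework.
# The labels and the placement of example applications are assumptions.

SYMMETRY = ("symmetric", "asymmetric")
LATENCY = ("real-time", "near-real-time", "delay-tolerant")

def classify(traffic_symmetry, latency_sensitivity):
    """Place an application into one of the 2 x 3 = 6 framework cells."""
    assert traffic_symmetry in SYMMETRY
    assert latency_sensitivity in LATENCY
    return (traffic_symmetry, latency_sensitivity)

examples = {
    "mobile video call":     classify("symmetric", "real-time"),
    "augmented reality":     classify("asymmetric", "real-time"),
    "mobile social network": classify("asymmetric", "near-real-time"),
    "m-health monitoring":   classify("asymmetric", "delay-tolerant"),
}
# len(SYMMETRY) * len(LATENCY) == 6 cells in total.
```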
Research limitations/implications
With the advent of high‐speed 4G networks, completely new mobile applications can be developed to leverage such high data rates, and a framework of such development efforts is highly desirable.
Originality/value
The framework is developed based on a perspective of technical characteristics because these characteristics intrinsically constrain the kinds of broadband mobile applications that can be developed. The framework should be useful in exploring opportunities of mobile application development and guiding future research in this area.
Sadat Riyaz and Vijay Kumar Sharma
Abstract
Purpose
This paper aims to propose reversible Feynman and double Feynman gates using quantum-dot cellular automata (QCA) nanotechnology with a minimum number of QCA cells and minimum latency, thereby minimizing the circuit area while improving energy efficiency.
Design/methodology/approach
The core aim of QCA nanotechnology is to build high-speed, energy-efficient and maximally small devices. This challenges designers to construct designs that fulfill the requirements as demanded. This paper proposes a new exclusive-OR (XOR) gate, which is then used to implement the logical operations of the reversible Feynman and double Feynman gates using QCA nanotechnology.
Findings
QCADesigner-E was used for the QCA designs and simulation results. The proposed QCA designs have lower latency, occupy less area and have a lower cell count than existing ones.
Originality/value
The latencies of the proposed gates are 0.25, a 50% improvement over the best available design reported in the literature. The cell count of the proposed XOR gate is 11, while the Feynman gate uses 14 cells and the double Feynman gate 27. The cell counts of the proposed designs are the lowest compared with the best available designs.
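At the logic level, the Boolean maps of the two gates (and their reversibility) can be checked as follows; this sketch does not model the QCA cell layout itself:

```python
from itertools import product

# Feynman gate (a controlled-NOT): (A, B)    -> (A, A xor B)
# Double Feynman gate:             (A, B, C) -> (A, A xor B, A xor C)
# Both are reversible: each input vector maps to a unique output vector.

def feynman(a, b):
    return a, a ^ b

def double_feynman(a, b, c):
    return a, a ^ b, a ^ c

# Reversibility check: the gate is a bijection on its input space.
assert len({feynman(a, b) for a, b in product((0, 1), repeat=2)}) == 4
assert len({double_feynman(a, b, c) for a, b, c in product((0, 1), repeat=3)}) == 8
# The Feynman gate is also its own inverse: applying it twice restores the input.
assert all(feynman(*feynman(a, b)) == (a, b) for a, b in product((0, 1), repeat=2))
```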
Ing-Chau Chang, Ciou-Song Lu and Sheng-Chih Wang
Abstract
Purpose
Previously, by adopting the handover prediction concept of fast mobile IPv6, the authors proposed a cross-layer architecture, called proactive fast HCoP-B (FHCoP-B), to trigger the layer 3 HCoP-B route optimization flow via 802.11 and 802.16 link events before the actual layer 2 handover of a mobile subnet in a nested mobile network (NEMO) occurs. In this way, proactive FHCoP-B shortens handover latency and packet loss. However, there are two scenarios in which proactive FHCoP-B cannot complete its operations normally, due to fast movements of the NEMO during handover. The paper aims to discuss these issues.
Design/methodology/approach
In this paper, the authors propose efficient reactive FHCoP-B flows for these two scenarios to support fast and seamless handovers. They further extend the analytical model proposed for mobile IPv6 to investigate four performance metrics of proactive and reactive FHCoP-B, HCoP-B and two well-known NEMO schemes with the radio link protocol (RLP), which detects packet losses and performs retransmissions over the error-prone wireless link.
Findings
Through intensive simulations, the authors conclude that FHCoP-B outperforms HCoP-B and the other two well-known NEMO schemes, achieving the shortest handover latencies, the smallest number of packet losses and the shortest playback interruption time during handover, with only a few extra buffer spaces, even over error-prone wireless links of the nested NEMO.
Originality/value
This paper has three major contributions, which are rare in the NEMO literature. First, the proactive FHCoP-B has been enhanced as the reactive one to handle two fast handover scenarios with RLP for the nested NEMO. Second, the reactive FHCoP-B supports seamless reactive handover for the nested NEMO over error-prone wireless links. Third, mathematical performance analyses for two scenarios of reactive FHCoP-B with RLP over error-prone wireless links have been conducted.
Seyyed Javad Seyyed Mahdavi Chabok and Seyed Amin Alavi
Abstract
Purpose
The routing algorithm is one of the most important components in designing a network-on-chip (NoC). An effective routing algorithm can yield better performance and throughput, with lower latency, lower power consumption and higher reliability. Given the high scalability of networks and the occurrence of faults on links, the fewer links a packet crosses to reach its destination, the lower the loss of packets and information. Accordingly, the proposed algorithm is based on reducing the number of links traversed to reach the destination.
Design/methodology/approach
This paper presents a high-performance NoC that increases telecommunication network reliability by traversing fewer links to the destination. A large NoC is divided into small districts with central routers. In such a system, routing over long routes is performed through these central routers, district by district.
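The district-by-district mechanism can be roughly sketched as follows; the district size, tiling and center placement are assumptions of this sketch, and it does not reproduce the paper's link-reduction optimization:

```python
# District-by-district routing versus plain dimension-ordered XY routing.
# Districts are assumed to be square tiles with a router near the middle;
# e.g. a 14 x 14 mesh split into four 7 x 7 districts.

def xy_hops(src, dst):
    """Hop count of XY routing on a 2D mesh (route along X first, then Y)."""
    return abs(dst[0] - src[0]) + abs(dst[1] - src[1])

def district_center(node, district=7):
    """Central router of the district (a district x district tile) containing node."""
    return (node[0] // district * district + district // 2,
            node[1] // district * district + district // 2)

def district_hops(src, dst, district=7):
    """District-by-district route: src -> its center -> dst's center -> dst."""
    c_src = district_center(src, district)
    c_dst = district_center(dst, district)
    return xy_hops(src, c_src) + xy_hops(c_src, c_dst) + xy_hops(c_dst, dst)

# A long route on a 14 x 14 mesh involves only the endpoints and a few
# central routers in its routing decisions, rather than every router en route.
long_route = district_hops((0, 0), (13, 13))
```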
Findings
By reducing the number of links, the number of routers also decreases. As a result, power consumption is reduced, the performance of the NoC is improved, and both the probability of encountering a faulty link and the network latency are decreased.
Originality/value
The simulation is performed using the Noxim simulator because of its ability to manage and inject faults. The proposed algorithm and XY routing, a conventional NoC algorithm, were simulated on a 14 × 14 network, the typical network size in recent works.
Roger van Rensburg, Bruce Mellado and Cesar Augusto Marin Tobon
Abstract
Purpose
The purpose of this study is to locally develop low-cost wireless mesh networks that provide reliable data communications to devices and prevent the theft of those devices in learning institutions of South Africa.
Design/methodology/approach
A network test-bench was developed where millions of packets were transmitted and logged between interconnected nodes to analyze the quality of the network’s service in a harsh indoor building environment. Similar methodologies in “big data” analysis as found in particle physics were adopted to analyze the network’s performance and reliability.
Findings
The results from statistical analysis reveal the quality of service between multiple asynchronous transmitting nodes in the network, compared with the wireless technology's routing protocol to assess coverage in large geographical areas. The mesh network provides stable data communications between nodes, with the exception of reliability degradation in some multi-hop routes. Conclusions are presented to determine whether the underlying mesh network technology will be deployed to protect devices against theft in educational institutions of South Africa.
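The kind of per-route reliability analysis described above can be sketched on a toy packet log; the log format and values are assumptions, whereas the study processed millions of logged packets:

```python
from collections import defaultdict

# Group logged packets by hop count and compute the delivery ratio per group,
# which is one way to expose reliability degradation on multi-hop routes.

def delivery_ratio_by_hops(log):
    """log: iterable of (hop_count, delivered) pairs -> {hop_count: ratio}."""
    sent = defaultdict(int)
    delivered = defaultdict(int)
    for hops, ok in log:
        sent[hops] += 1
        delivered[hops] += ok
    return {h: delivered[h] / sent[h] for h in sent}

# Toy log: 100 single-hop packets (2 lost), 100 three-hop packets (15 lost).
toy_log = [(1, 1)] * 98 + [(1, 0)] * 2 + [(3, 1)] * 85 + [(3, 0)] * 15
ratios = delivery_ratio_by_hops(toy_log)
# ratios[1] == 0.98, ratios[3] == 0.85: reliability degrades on the
# multi-hop routes, mirroring the degradation observed in the study.
```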
Research limitations/implications
The anti-theft application will focus on proprietary firmware development with a reputable tablet manufacturer to render the device inoperable. Data communications of devices to the network will be monitored and controlled from a central management system. The electronics embedding the system-on-chip will be redesigned and developed using the guidelines stipulated by the chip manufacturer.
Originality/value
Design and development of low-cost wireless mesh networks to protect tablets against theft in institutions of digitized learning. The work presents performance and reliability metrics of a low-power wireless mesh technology developed in a harsh indoor building environment.
A.K. Oudjida, S. Titr and M. Hamarlain
Abstract
The emergence of the systolic paradigm in 1978 inspired the first 2D‐array parallelization of the sequential matrix multiplication algorithm. Since then, due to its attractive and appealing features, the systolic approach has gained great momentum, to the point where all 2D‐array parallelization attempts have been exclusively systolic. As a result, latency has been successively reduced a number of times (5N, 3N, 2N, 3N/2), where N is the matrix size. But as latency got lower, further irregularities were introduced into the array, severely compromising the implementation at either the VLSI level or the system level. The best illustrative cases of such irregularities are the two designs proposed by Tsay and Chang in 1995, considered the fastest designs (3N/2) developed so far. The purpose of this paper is twofold: we first demonstrate that N+√N/2 is the minimal latency achievable using the systolic approach; we then introduce a full‐parallel 2D‐array algorithm with N latency and 2N I/O‐bandwidth. This novel algorithm is not only the fastest but also the most regular. A 3D parallel version with O(log N) latency is also presented.
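To make the latency figures concrete, here is a cycle-level sketch of a classic output-stationary systolic array (the textbook design with a latency on the order of 3N cycles, not the paper's N-latency algorithm):

```python
# Each processing element PE(i, j) accumulates C[i][j]. With the standard
# input skew, PE(i, j) receives A[i][k] and B[k][j] at cycle t = i + j + k,
# so the final multiply-accumulate finishes at cycle 3N - 3, i.e. after
# 3N - 2 cycles in total.

def systolic_matmul(A, B):
    """Multiply square matrices A and B on a simulated systolic array.
    Returns (C, cycles), where cycles is the simulated latency."""
    N = len(A)
    C = [[0] * N for _ in range(N)]
    cycles = 0
    for t in range(3 * N - 2):
        for i in range(N):
            for j in range(N):
                k = t - i - j              # which operand pair reaches PE(i, j) now
                if 0 <= k < N:
                    C[i][j] += A[i][k] * B[k][j]
        cycles += 1
    return C, cycles

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C, cycles = systolic_matmul(A, B)
# C == [[19, 22], [43, 50]] and cycles == 3*2 - 2 == 4
```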