Search results
1 – 10 of 60

Jinghan Du, Haiyan Chen and Weining Zhang
Abstract
Purpose
In large-scale monitoring systems, sensors deployed at different locations collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, owing to hardware faults, sensor nodes often stop working, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks.
Design/methodology/approach
Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, i.e. a deep belief network (DBN). Specifically, when one sensor fails, its own historical time-series data and the real-time data from surrounding sensor nodes that are highly similar to the failed sensor, as identified by the proposed similarity filter, are collected first. Then, the high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, a reconstruction error-based algorithm is proposed to determine the structure of the DBN model. Finally, the missing data are predicted from these features by a single-layer neural network.
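The first step of the pipeline described above, selecting surrounding sensors that are highly similar to the failed one, might look roughly like the following minimal sketch. The function name, the Pearson-correlation similarity measure and the parameter k are illustrative assumptions, not the paper's actual similarity filter:

```python
import numpy as np

def similarity_filter(failed_history, neighbor_histories, k=3):
    """Pick the k surrounding sensors whose historical series are most
    similar to the failed sensor's own history.  Pearson correlation is
    used as the similarity measure here (an assumption; the abstract
    does not specify the exact filter)."""
    sims = []
    for series in neighbor_histories:
        r = np.corrcoef(failed_history, series)[0, 1]
        sims.append(abs(r))  # strong negative correlation is also informative
    order = np.argsort(sims)[::-1]  # most similar first
    return order[:k]
```

The selected neighbors' real-time readings, together with the failed sensor's own history, would then form the input whose features the DBN extracts.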
Findings
This paper uses a noise data set collected from an airport monitoring system for experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness.
Originality/value
A deep learning method is investigated for the data recovery task and proves effective compared with previous methods. This may provide practical experience for applying deep learning methods.
Kam C. Chan, Feida Zhang and Weining Zhang
Abstract
Purpose
The purpose of this paper is to study the relationship between institutional holdings and analyst coverage in the context of the heterogeneous nature of institutional investors.
Design/methodology/approach
Similar to prior studies (e.g. Ke and Ramalingegowda; Ramalingegowda and Yu), this paper obtains institutional investors' trading classifications (transient, dedicated, and quasi‐indexing) from Brian Bushee directly. To examine the hypotheses, the paper uses a two‐step instrumental variable approach demonstrated in O'Brien and Bhushan to mitigate the simultaneity relationship between the change in analyst coverage and the change in the number of heterogeneous institutional investors.
Findings
The findings suggest that such relations differ among transient, dedicated, and quasi‐indexing institutional investors. Specifically, there are three major results. First, a change in analyst coverage has the lowest impact on the change in the number of dedicated institutional investors. Second, a change in the number of transient institutional investors has a higher impact on the change in analyst coverage than changes in the number of dedicated and quasi‐indexing institutional investors. Third, changes in analysts' buy or sell recommendations have the least impact on the change in the number of dedicated institutions, relative to transient and quasi‐indexing institutions.
Research limitations/implications
The findings suggest that institutional investors are not homogeneous. Research studies on institutional investors need to disentangle the differences among different types of institutions.
Originality/value
The paper provides a comprehensive study on different institutional investors and analyst coverage. The findings show the complex nature of the interaction between institutional investors and analyst coverage.
Wei Zhang, Xianghong Hua, Kegen Yu, Weining Qiu, Shoujian Zhang and Xiaoxing He
Abstract
Purpose
This paper aims to introduce the weighted squared Euclidean distance between points in signal space to improve the performance of Wi-Fi indoor positioning. Nowadays, received signal strength-based Wi-Fi indoor positioning, a low-cost indoor positioning approach, has attracted significant attention from both academia and industry.
Design/methodology/approach
The local principal gradient direction is introduced and used to define the weighting function, and an averaging algorithm based on the k-means algorithm is used to estimate the local principal gradient direction of each access point. Then, correlation distance is used in the new method to find the k nearest calibration points. The weighted squared Euclidean distance between each of these nearest calibration points and the target point is calculated and used to estimate the position of the target point.
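The final estimation step described above can be sketched as follows. This is a minimal illustration only: the weight vector is taken as given (the paper derives it from the local principal gradient direction, omitted here), and the inverse-distance combination rule and all names are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def estimate_position(target_rss, fingerprints, positions, weights, k=3):
    """Weighted squared Euclidean distance in signal space between the
    target's RSS vector and every calibration fingerprint; the k nearest
    calibration points' coordinates are then combined by inverse-distance
    weighting (an assumed combination rule)."""
    d2 = np.sum(weights * (fingerprints - target_rss) ** 2, axis=1)
    idx = np.argsort(d2)[:k]          # k nearest calibration points
    w = 1.0 / (d2[idx] + 1e-9)        # closer fingerprints count more
    w /= w.sum()
    return w @ positions[idx]         # weighted average of coordinates
```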
Findings
Experiments are conducted and the results indicate that the proposed Wi-Fi indoor positioning approach considerably outperforms the weighted k nearest neighbor method. The new method also outperforms support vector regression and extreme learning machine algorithms in the absence of sufficient fingerprints.
Research limitations/implications
The weighted k nearest neighbor approach, the support vector regression algorithm and the extreme learning machine algorithm are the three classic strategies for location determination using Wi-Fi fingerprinting. However, weighted k nearest neighbor suffers from dramatic performance degradation in the presence of multipath signal attenuation and environmental changes. Support vector regression requires more fingerprints to ensure desirable performance, and labeling Wi-Fi fingerprints is labor-intensive. The performance of the extreme learning machine algorithm may not be stable.
Practical implications
The new weighted squared Euclidean distance-based strategy can improve the performance of Wi-Fi indoor positioning systems.
Social implications
An effective received signal strength-based Wi-Fi indoor positioning system can substitute for the global positioning system, which does not work indoors. This effective and low-cost positioning approach would be promising for many indoor location-based services.
Originality/value
A novel Wi-Fi indoor positioning strategy based on the weighted squared Euclidean distance is proposed in this paper to improve the performance of the Wi-Fi indoor positioning, and the local principal gradient direction is introduced and used to define the weighting function.
Wei Zhang, Xianghong Hua, Kegen Yu, Weining Qiu, Xin Chang, Bang Wu and Xijiang Chen
Abstract
Purpose
Nowadays, WiFi indoor positioning based on received signal strength (RSS) has become a research hotspot owing to its low cost and ease of deployment. To further improve the performance of RSS-based WiFi indoor positioning, this paper aims to propose a novel position estimation strategy called radius-based domain clustering (RDC). This domain clustering technique avoids the issue of access point (AP) selection.
Design/methodology/approach
The proposed positioning approach uses each individual AP among all available APs to estimate the position of the target point. Then, following the notion of circular error probable, the authors search for the decision domain that contains 50 per cent of the intermediate position estimates while minimizing the radius of the enclosing circle via the RDC algorithm. The final estimate of the target point's position is obtained by averaging the intermediate position estimates in the decision domain.
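The decision-domain search described above might be sketched as follows. Using the intermediate estimates themselves as candidate circle centres is a simplifying assumption; the paper's actual minimization procedure is not given in the abstract, and all names are illustrative:

```python
import numpy as np

def rdc_estimate(points, fraction=0.5):
    """Radius-based domain clustering sketch: try each intermediate
    estimate as a candidate centre, compute the radius of the circle
    enclosing the required fraction of estimates, keep the candidate
    with the smallest radius, and average the enclosed estimates."""
    m = int(np.ceil(fraction * len(points)))
    best_r, best_c = None, None
    for c in points:
        d = np.linalg.norm(points - c, axis=1)
        r = np.sort(d)[m - 1]          # radius enclosing m estimates
        if best_r is None or r < best_r:
            best_r, best_c = r, c
    mask = np.linalg.norm(points - best_c, axis=1) <= best_r
    return points[mask].mean(axis=0)   # average inside the decision domain
```

Because half of the per-AP estimates are discarded, outlier estimates from poorly placed APs never enter the final average, which is how the approach sidesteps explicit AP selection.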
Findings
Experiments are conducted, and comparison among the different position estimation strategies demonstrates that the new method achieves better location estimation accuracy and reliability.
Research limitations/implications
The weighted k nearest neighbor approach and the naive Bayes classifier method are two classic position estimation strategies for location determination using WiFi fingerprinting. Both strategies are affected by AP selection, and inappropriate selection of APs may degrade positioning performance considerably.
Practical implications
The RDC positioning approach can improve the performance of WiFi indoor positioning, and the issue of AP selection and related drawbacks is avoided.
Social implications
An effective RSS-based WiFi indoor positioning system can make up for the indoor positioning weaknesses of global navigation satellite systems. Many indoor location-based services can be enabled by this effective and low-cost positioning technology.
Originality/value
A novel position estimation strategy is introduced to avoid the AP selection problem in RSS-based WiFi indoor positioning technology, and the domain clustering technology is proposed to obtain a better accuracy and reliability.
Weining Qi, Hongyi Yu, Jinya Yang and Xia Zhang
Abstract
The CEDAR protocol is a distributed routing protocol oriented to quality of service (QoS) support in MANETs, and bandwidth is the QoS parameter of interest in this protocol. However, without energy efficiency considerations, overloaded nodes in CEDAR fail earlier, which may in turn lead to network partitioning and reduced network lifetime. The storage and processing overhead of CEDAR is fairly high, because too many kinds of control packets are exchanged between nodes and too much state information must be maintained by core nodes. The routing algorithm depends fully on the link state information known by core nodes, but this information may be imprecise, resulting in route failures. In this paper, we present an improved energy-efficient CEDAR protocol and propose a new efficient method of bandwidth calculation. Simulation results show that the improved CEDAR is efficient in terms of packet delivery ratio, throughput and mean-square error of energy.
He Huang, Weining Wang and Yujie Yin
Abstract
Purpose
This study focuses on the clothing recycling supply chain and aims to provide optimal decisions and managerial insights into supply chain strategies, thereby facilitating the sustainable development of the clothing industry.
Design/methodology/approach
Based on previous single- and dual-channel studies, game theory was employed to analyze multiple recycling channels. Concurrently, clothing consumer types were integrated into the analytical models to observe their impact on supply chain strategies. Three market scenarios were modeled for comparative analysis, and numerical experiments were conducted.
Findings
The intervention of fashion retailers in the clothing recycling market has intensified competition across the entire market. The proportions of various consumer types, their preferences for online platforms and their preference for the retailer’s channel influence the optimal decisions and profits of supply chain members. The diversity of recycling channels may enhance the recycling volume of clothes; however, it should meet certain conditions.
Originality/value
This study extends the existing theory from a channel dimension by exploring multiple channels. Furthermore, by investigating the classifications of clothing consumers and their influence on supply chain strategies, the theory is enhanced from the consumer perspective.
Sridevi P, Saikiran Niduthavolu and Lakshmi Narasimhan Vedanthachari
Abstract
Purpose
The purpose of this paper is to design organization message content strategies and analyse their information diffusion on the microblogging website, Twitter.
Design/methodology/approach
Using data from 29 brands and 9,392 tweets, message strategies on Twitter are classified into four types. Using content analysis, all tweets are classified into informational, transformational, interactional and promotional strategies. The information diffusion of these message strategies is then explored. Furthermore, the effects of message content features, such as text readability, language, Twitter-specific and vividness features, on information diffusion are analysed across message strategies, as is the interaction between message strategies and message features.
Findings
Findings reveal that informational strategies were the dominant message strategy on Twitter. The text readability, language, Twitter-specific and vividness features that influenced information diffusion varied across the four message strategies.
Originality/value
This study offers a novel way to effectively analyse information diffusion for branded tweets on Twitter and can guide both researchers and practitioners in developing successful social media marketing strategies.
Abstract
Purpose
This paper aims to examine the time it would take to provide medical prophylaxis for a large urban population in the wake of an airborne anthrax attack and the effect that various parameters have on the total logistical time.
Design/methodology/approach
A mathematical model that evaluates key parameters and suggests alternatives for improvement is formulated. The objective of the model is to minimize the total logistical time required for prophylaxis by balancing three cycles as follows: the loading cycle, the shipping cycle and the service cycle.
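The three-cycle structure described above could take a form like the following toy sketch. The functional form, parameter names and units are assumptions for illustration; the paper's actual model is not given in the abstract:

```python
def total_logistical_time(population, n_centers, servers_per_center,
                          service_rate, load_time, ship_time):
    """Toy three-cycle prophylaxis model: loading and shipping act as
    overheads, while service time scales with the exposed population and
    shrinks as distribution centres and servers are added.  service_rate
    is people served per server per hour (all assumed quantities)."""
    service_time = population / (n_centers * servers_per_center * service_rate)
    return load_time + ship_time + service_time
```

Even this crude form reflects the reported finding qualitatively: doubling the number of distribution centres or servers halves the dominant service term, whereas changes to the shipping overhead shift the total only additively.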
Findings
Applying the model to two representative cases reveals the effect of various parameters on the process. For example, the number of distribution centers and the number of servers in each center are key parameters, whereas the number of central depots and the local shipping method are less important.
Research limitations/implications
Various psychological factors such as mass panic are not included in the model.
Originality/value
There are few papers analyzing the logistical response to an anthrax attack, and most focus mainly on the strategic level. The study deals with the tactical logistical level. The authors focus on the distribution process of prophylaxis and other medical supplies during the crisis, analyze it and identify the parameters that influence the time between the detection of the attack and the provision of effective medical treatment to the exposed population.
Maximilian M. Spanner and Julia Wein
Abstract
Purpose
The purpose of this paper is to investigate the functionality and effectiveness of the Carbon Risk Real Estate Monitor (CRREM tool). The aim of the project, supported by the European Union’s Horizon 2020 research and innovation program, was to develop a broadly accepted tool that provides investors and other stakeholders with a sound basis for the assessment of stranding risks.
Design/methodology/approach
The tool calculates the annual carbon emissions (baseline emissions) of a given asset or portfolio and assesses the stranding risks, by making use of science-based decarbonisation pathways. To account for ongoing climate change, the tool considers the effects of grid decarbonisation, as well as the development of heating and cooling-degree days.
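The stranding check described above amounts to comparing an asset's emission trajectory against a declining science-based target path. The sketch below is a simplified reading of that logic, not the CRREM tool's implementation; the constant asset intensity and the (year, target) pathway format are assumptions:

```python
def stranding_year(asset_intensity, pathway):
    """Return the first year in which the asset's emission intensity
    exceeds the science-based decarbonisation target, i.e. the year the
    asset 'strands'.  pathway is a list of (year, target) pairs sorted
    by year, with targets declining over time."""
    for year, target in pathway:
        if asset_intensity > target:
            return year
    return None  # the asset stays within the pathway
```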
Findings
The paper provides property-specific carbon emission pathways, as well as valuable insight into state-of-the-art carbon risk assessment and management measures, and thereby paves the way towards a low-carbon building stock. Furthermore, selected risk indicators at the asset level (e.g. costs of greenhouse gas emissions) and aggregated level (e.g. carbon value at risk) are considered.
Research limitations/implications
The approach described in this paper can serve as a model for the realisation of an enhanced tool with respect to other countries, leading to a globally applicable instrument for assessing stranding risks in the commercial real estate sector.
Practical implications
The real estate industry is endangered by the downside risks of climate change, leading to potential monetary losses and write-downs. Accordingly, this approach enables stakeholders to assess the exposure of their assets to stranding risks, based on energy and emission data.
Social implications
The CRREM tool reduces investor uncertainty and offers a viable basis for investment decision-making with regard to stranding risks and retrofit planning.
Originality/value
The approach pioneers a way to provide investors with a profound stranding risk assessment based on science-based decarbonisation pathways.
Xuelei Yang, Hangbiao Shang, Weining Li and Hailin Lan
Abstract
Purpose
Based on the socio-emotional wealth and agency theories, this study empirically investigates the impact of family ownership and management on green innovation (GI) in family businesses, as well as the moderating effects of institutional environmental support factors, namely, the technological achievement marketisation index and the market rule-of-law index.
Design/methodology/approach
This study empirically tests the hypotheses based on a sample of listed Chinese family companies with A-shares in 14 heavily polluting industries from 2009 to 2019.
Findings
There is a U-shaped relationship between the percentage of family ownership and GI, and an inverted U-shaped relationship between the degree of family management and GI. Additionally, different institutional environmental support factors affect these relationships in different ways. As the technological achievement marketisation index increases, the U-shaped relationship between the percentage of family ownership and GI becomes steeper, while the inverted U-shaped relationship between the degree of family management and GI becomes smoother. The market rule-of-law index weakens the U-shaped relationship between family ownership and GI.
Originality/value
First, the authors enrich research on the driving factors of GI from the perspective of the most essential heterogeneity of family businesses, showing nonlinear and opposite effects of family ownership and management on GI in family firms. Second, this study contributes to the literature on family firm innovation: GI has rarely been considered, an important deficiency in research on innovation in family businesses, and this study fills that gap. Third, the study expands research on moderating effects in the GI literature from the perspective of institutional environmental support factors.