Search results
1 – 10 of over 4000
Chunyan Zeng, Dongliang Zhu, Zhifeng Wang, Zhenghui Wang, Nan Zhao and Lu He
Abstract
Purpose
Most source recording device identification models for Web media forensics rely on a single feature to complete the identification task and often suffer from long running times and poor accuracy. The purpose of this paper is to propose a new multi-feature fusion, end-to-end network method for source recording device identification.
Design/methodology/approach
This paper proposes an efficient multi-feature fusion source recording device identification method based on an end-to-end network and an attention mechanism, enabling efficient and convenient identification of recording devices for Web media forensics.
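As an illustration of the fusion idea described above, the following is a minimal, framework-free sketch of attention-weighted feature fusion; the feature vectors, relevance scores and softmax weighting are assumptions for illustration, not the paper's actual network.

```python
# Attention-based multi-feature fusion sketch: each feature vector gets a
# relevance score, scores are softmax-normalized, and the fused
# representation is the weighted sum of the feature vectors.
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(features, scores):
    """Fuse equal-length feature vectors with attention weights."""
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

# Two hypothetical device-signature features (e.g. spectral and cepstral).
fused = fuse([[1.0, 0.0], [0.0, 1.0]], scores=[2.0, 0.0])
print(fused)  # weighted toward the first (higher-scored) feature
```

In a trained network the scores themselves would be produced by a learned attention layer; here they are fixed constants to keep the sketch self-contained.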
Findings
The authors conducted sufficient experiments to prove the effectiveness of the proposed models. The experiments show that the end-to-end system improves accuracy by 7.1% over the baseline i-vector system; compared to the authors’ previous system, accuracy improves by 0.4% and training time is reduced by 50%.
Research limitations/implications
With the development of Web media forensics and internet technology, the use of Web media as evidence is increasing. In particular, it is important to study the authenticity and accuracy of Web media audio.
Originality/value
This paper aims to promote the development of source recording device identification and provide effective technology for Web media forensics and judicial record evidence that need to apply device source identification technology.
Abstract
Purpose
Starting from the end‐to‐end principle, a founding element of the internet's technical architecture, the paper aims to discuss its extension and effects at the social level. It shows how the internet moves power from governments and private entities to individual citizens, restructuring our societies and creating a new global stakeholder class – individual users of the internet. It connects the advent of this stakeholder class with a traditional principle of internet governance, “rough consensus”. It discusses advantages and risks of this change, suggesting that this shift of power might be beneficial to solve deadlocks in the governance of global phenomena and to ensure that solutions pursue the global public interest. Finally, it discusses how this social evolution can be protected from opposing forces, countering the opinion that the freedom of the internet is intrinsic and not needing regulatory supports.
Design/methodology/approach
The paper builds upon observation of case studies, such as the struggle between the industry and users over peer‐to‐peer music downloads, and upon the author's first‐hand experience in global internet governance processes.
Findings
The paper formalizes a social expression of the end‐to‐end principle and demonstrates the need for such a principle to be recognized and protected by regulation, in order to preserve the social model described in the paper and its benefits.
Originality/value
The paper explores the connections between the technical, economic and social architectures of the global network, providing support for understanding the political dynamics of the internet and other global phenomena, and for designing effective governance processes to address them.
Patrick Rigot-Muller, Chandra Lalwani, John Mangan, Orla Gregory and David Gibbs
Abstract
Purpose
The purpose of this paper is to illustrate an optimisation method, and resulting insights, for minimising total logistics-related carbon emissions for end-to-end supply chains.
Design/methodology/approach
The research is based on two real-life UK industrial cases. For the first case, several alternative realistic routes towards the UK are analysed and the optimal route minimising total carbon emissions is identified and tested in real conditions. For the second case, emissions towards several destinations are calculated and two alternative routes to southern Europe are compared, using several transport modes (road, Ro-Ro, rail and maritime). An adapted Value Stream Mapping (VSM) approach is used to map the carbon footprint and calculate emissions; in addition, Automatic Identification System (AIS) data provided vessel specifications, allowing the use of more accurate emission factors for each shipping leg.
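The leg-by-leg accounting behind such a route comparison can be sketched as follows; the emission factors, modes, distances and cargo mass below are hypothetical placeholders, not the Defra factors or the case-study data used in the paper.

```python
# Leg-by-leg CO2 accounting for an end-to-end route: emissions per leg are
# mass (tonnes) x distance (km) x a mode-specific emission factor.

EMISSION_FACTORS = {  # grams CO2 per tonne-km (assumed illustrative values)
    "road": 62.0,
    "rail": 22.0,
    "maritime": 8.0,
    "ro-ro": 40.0,
}

def leg_emissions(mode, distance_km, cargo_tonnes):
    """CO2 for one transport leg, in kilograms."""
    return EMISSION_FACTORS[mode] * distance_km * cargo_tonnes / 1000.0

def route_emissions(legs, cargo_tonnes):
    """Total CO2 for an end-to-end route given as (mode, distance_km) legs."""
    return sum(leg_emissions(mode, km, cargo_tonnes) for mode, km in legs)

# Compare a direct maritime delivery with transhipment via a continental hub.
direct = [("maritime", 9000), ("road", 150)]
via_hub = [("maritime", 8800), ("ro-ro", 600), ("road", 300)]

print(route_emissions(direct, 20))   # kg CO2 for 20 tonnes, direct route
print(route_emissions(via_hub, 20))  # kg CO2 via the transhipment hub
```

Optimising a route then amounts to evaluating `route_emissions` over the candidate routings and picking the minimum, which mirrors the comparison performed in the two cases.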
Findings
The analysis of the first case demonstrates that end-to-end logistics-related carbon emissions can be reduced by 16-21 per cent through direct delivery to the UK as opposed to transhipment via a Continental European port. The analysis of the second case shows that deliveries to southern Europe have the highest potential for reduction through deliveries by sea. Both cases show that for distant overseas destinations, the maritime leg is the major contributor to CO2 emissions in the end-to-end supply chain. Notably, one of the main apportionment approaches (that of Defra in the UK) generates higher carbon footprints for routes using Ro-Pax vessels, making those routes appear suboptimal. The feasibility of the optimal route was demonstrated with real-life data.
Originality/value
This research used real-life data from two UK companies and highlighted where carbon emissions are generated in the inbound and outbound transport chain, and how these can be reduced.
Nabeena Ameen, Najumnissa Jamal and Arun Raj
Abstract
Purpose
With the rapid growth of wireless sensor networks (WSNs), they have become an integral and substantial part of people's lives. WSNs thus present an assuring outlook, but because of sensors' resource limitations and other prerequisites, optimal dual-route discovery becomes an issue of concern. A WSN with a central sink node is capable of handling wireless transmission, optimizing the network's lifetime by selecting a dual path. The major problem confronted in applying security mechanisms in WSNs is resolving the trade-off between reducing resource consumption and increasing security.
Design/methodology/approach
According to the proposed system, two metrics, namely, path length and packet delivery ratio, are incorporated for identifying dual routes between the source and destination. Thereafter, by making use of the distance metric, the optimal dual route is chosen and data transmission is carried out between the nodes. With the recommended routing protocol, a high packet delivery ratio is achieved with reduced routing overhead and low average end-to-end delay. The simulation output clearly shows that the proposed on-demand dual-path routing protocol surpasses the prevailing routing protocols. Moreover, security is achieved through data compression, which reduces the size of the data. With the help of the dual path, a mathematical model based on Finite Automata Theory is derived to transmit data from source to destination; it employs a Deterministic Finite Automaton (DFA) for dual-path selection, with data transition functions defined for each input stage. In this work, another mathematical model is introduced to efficiently choose an alternate path between a receiver and a transmitter, with a qualified node acting as relay node, using the RR algorithm. It also includes a dynamic mathematical model for node localization to improve the precision of location estimation using a node localization algorithm. Finally, a simulator is built and various scenarios are elaborated for comparing the performance of the recommended dual-path routing protocol with the prevailing ones.
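The DFA-driven dual-path selection can be illustrated with a toy automaton; the states, input symbols and transition table below are assumptions for illustration, since the abstract does not specify the paper's actual transition functions.

```python
# Hypothetical DFA for dual-path selection: the states are the two discovered
# routes, and the input symbols are per-hop link-status reports ("ok"/"fail").

DFA = {
    # (current_path, input_symbol) -> next_path
    ("primary", "ok"): "primary",
    ("primary", "fail"): "backup",   # switch routes on a link/node failure
    ("backup", "ok"): "backup",
    ("backup", "fail"): "primary",   # fall back if the backup also fails
}

def select_path(events, start="primary"):
    """Run the DFA over a sequence of link-status events and return the
    path used after each event."""
    state, trace = start, []
    for symbol in events:
        state = DFA[(state, symbol)]
        trace.append(state)
    return trace

print(select_path(["ok", "ok", "fail", "ok"]))
# After the failure event, forwarding continues on the backup route.
```

The point of the DFA formulation is that the switching decision is a pure table lookup per event, which keeps the per-hop selection cost constant.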
Findings
Reliability and fault-tolerance: the original motive for multipath routing in sensor networks was to offer path resilience in case of node or link failures, thus ensuring reliable data transmission. In a fault-tolerant domain, when a sensor node is unable to forward data packets to the sink, alternative paths can be used to recover the packets during any link/node failure.
Load balancing: load balancing involves equalizing the energy consumption of all existing nodes so that they deplete together; load balancing via clustering improves network scalability, and the network's lifetime and reliability can be extended if nodes with varied energy levels exist in the network.
Quality of service (QoS): support for QoS with respect to data delivery ratio, network throughput and end-to-end latency is very significant in building multipath routing protocols for various network types.
Reduced delay: delay is reduced in multipath routing since backup routes are determined at the time of route discovery.
Bandwidth aggregation: by splitting data toward the same destination into multiple streams, each routed along a separate path, the effective bandwidth can be aggregated; a node with many low-bandwidth links can thus acquire more bandwidth than any individual link provides.
Research limitations/implications
A few more new algorithms could be used to compare the QoS parameters.
Practical implications
The proposed mechanism with feedback ensures an improved delivery ratio compared to the single-path protocol since, in case of link failure, the protocol has an alternative route. With 50 nodes in the network, the detection mechanism yields a packet delivery of 95%; with 100 nodes, packet delivery drops to 89%. The packet rate in the network is higher for small node ranges. When the node count is 200, the packet ratio is low, dropping to 85%; with a node count of 400, the curve shows a value of 87%. Hence, even with the decrease, it is superior to the existing protocols. The average end-to-end delay represents the transmission delay of the data packets that have been successfully delivered, as depicted in Figure 6 and Table 3. The recommended system accounts for both the queuing and the propagation delay from source to destination. The figure shows that, compared to the single-path protocol, the end-to-end delay can be reduced via route switching. End-to-end delay signifies the time taken until each node receives the retransmitted packet. The comparison reveals that the delay was lower than in the existing WSN protocols. The proposed protocol helps reduce energy consumption in the transmitter, receiver and various sensors; comparative analysis of the sensors' energy consumption shows that the recommended system exhibits lower energy use than the prevailing systems.
Originality/value
An on-demand dual-path routing protocol is proposed, and it is verified that this protocol, comprising DFA algorithms, determines the dual path. A mathematical model for routing between two nodes with a relay node is derived using the RR algorithm to determine an alternate path and thus reduce energy consumption. Another dynamic mathematical model for node localization is derived using the localization algorithm. To transmit data with secure and promising QoS in WSNs, a routing optimization technique has been introduced. The simulation software environment follows the DFA, and the simulation yields improved performance with respect to packet delivery ratio, throughput, average end-to-end delay and routing overhead. It is thus shown that the DFA is capable of optimizing routing algorithms, which facilitates multimedia applications over WSNs.
Na Pang, Li Qian, Weimin Lyu and Jin-Dong Yang
Abstract
Purpose
In computational chemistry, the chemical bond energy (pKa) is essential, but most pKa-related data are submerged in scientific papers, with only a small amount having been extracted manually by domain experts. This loss of scientific data hinders in-depth and innovative scientific data analysis. To address this problem, this study aims to utilize natural language processing methods to extract pKa-related scientific data from chemical papers.
Design/methodology/approach
Building on their previous Bert-CRF model, which combined dictionaries and rules to resolve the problem of a large number of unknown professional-vocabulary words, in this paper the authors propose an end-to-end Bert-CRF model whose input incorporates constructed domain wordpiece tokens. They use standard high-frequency string extraction techniques to construct domain wordpiece tokens for specific domains, and in the subsequent deep learning work, these domain features are added to the input.
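The high-frequency string extraction step can be sketched as follows; the tiny corpus, n-gram ranges and `frequent_ngrams` helper are hypothetical, and this is not the authors' exact procedure or the real BERT WordPiece algorithm.

```python
# Build candidate domain wordpiece tokens by counting character n-grams
# across domain terms and keeping the most frequent fragments.
from collections import Counter

def frequent_ngrams(corpus, n_min=3, n_max=6, top_k=5):
    """Count character n-grams across domain terms; return the most frequent."""
    counts = Counter()
    for term in corpus:
        for n in range(n_min, n_max + 1):
            for i in range(len(term) - n + 1):
                counts[term[i:i + n]] += 1
    return [gram for gram, _ in counts.most_common(top_k)]

# Tiny hypothetical chemistry vocabulary; real work would use full-text papers.
corpus = ["acetonitrile", "acetophenone", "acetaldehyde", "benzaldehyde"]
domain_tokens = frequent_ngrams(corpus, top_k=3)
print(domain_tokens)  # shared fragments such as "ace" and "acet"
```

Fragments extracted this way can then be registered as tokenizer vocabulary so that domain terms are no longer shattered into many unknown pieces before reaching the model.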
Findings
The experiments show that the end-to-end Bert-CRF model achieves relatively good results and can be easily transferred to other domains, because it reduces the requirements for experts: automatic high-frequency wordpiece-token extraction techniques construct the domain wordpiece tokenization rules, and the resulting domain features are then input to the Bert model.
Originality/value
By decomposing lots of unknown words with domain feature-based wordpiece tokens, the authors manage to resolve the problem of a large amount of professional vocabulary and achieve a relatively ideal extraction result compared to the baseline model. The end-to-end model explores low-cost migration for entity and relation extraction in professional fields, reducing the requirements for experts.
William Lehr and Lee W. McKnight
Abstract
Delivering real‐time services (Internet telephony, video conferencing, and streaming media as well as business‐critical data applications) across the Internet requires end‐to‐end quality of service (QoS) guarantees, which requires a hierarchy of contracts. These standardized contracts may be referred to as service level agreements (SLAs). SLAs provide a mechanism for service providers and customers to flexibly specify the service to be delivered. The emergence of bandwidth and service agents, traders, brokers, exchanges and contracts can provide an institutional and business framework to support effective competition. This article identifies issues that must be addressed by SLAs for consumer applications. We introduce a simple taxonomy for classifying SLAs based on the identity of the contracting parties. We conclude by discussing implications for public policy, Internet architecture, and competition.
Abstract
Purpose
End-to-end latency in a network affects overall performance in a number of ways. Minimizing end-to-end latency is a major task in cognitive radio ad hoc networks (CRAHN), as packet transmission passes through every hop of the routing path. This paper aims to propose a new reactive multicast routing protocol, namely, the improved frog leap inspired protocol (IFLIP), to reduce overall end-to-end latency in CRAHN.
Design/methodology/approach
Problems that emerge in routing optimization are difficult to solve, routing being the procedure for choosing the best network path. This paper proposes a novel algorithm that improves FLIP to use an ideal route, progressively reducing the congestion level on the various routing paths by considering the spectrum accessibility and the service rate of each hop in CRAHN.
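The hop-wise route-scoring idea can be sketched as follows; the latency model, service rates, availability values and `best_route` helper are assumptions for illustration, not the actual IFLIP algorithm.

```python
# Estimate each candidate route's end-to-end latency from per-hop service
# rate and spectrum accessibility, then pick the lowest-latency route.

def route_latency(hops):
    """Sum per-hop delays: a hop with service_rate packets/s and spectrum
    availability p (0..1) is modeled as 1 / (service_rate * p) seconds."""
    return sum(1.0 / (rate * avail) for rate, avail in hops)

def best_route(candidates):
    """Return the (name, hops) candidate with the lowest estimated latency."""
    return min(candidates, key=lambda named: route_latency(named[1]))

candidates = [
    ("route-A", [(100, 0.9), (20, 0.3)]),             # congested middle hop
    ("route-B", [(60, 0.8), (70, 0.8), (90, 0.9)]),   # longer but smoother
]
name, hops = best_route(candidates)
print(name, round(route_latency(hops), 4))
```

The example shows why a longer route can win: the extra hop costs less than traversing a single congested, spectrum-starved link.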
Findings
Result of this research work concludes that IFLIP significantly outperforms other baseline schemes (namely, TIGHT and Greedy TIGHT) in minimizing the end-to-end latency in CRAHN.
Originality/value
It is proved that IFLIP gives a better packet delivery ratio under varying numbers of primary and secondary users. IFLIP results in an increased packet delivery ratio, reduced end-to-end latency and better throughput.
Yupei Wu, Di Guo, Huaping Liu and Yao Huang
Abstract
Purpose
Automatic defect detection is a fundamental and vital topic in the research field of industrial intelligence. In this work, the authors develop a more flexible deep learning method for industrial defect detection.
Design/methodology/approach
The authors propose a unified framework for detecting defects in industrial products or planar surfaces based on an end-to-end learning strategy. A lightweight deep learning architecture for blade defect detection is specifically demonstrated. In addition, a blade defect data set is collected with the dual-arm image collection system.
Findings
Numerous experiments are conducted on the collected data set, and the results demonstrate that the proposed system achieves satisfactory performance compared with other methods. Furthermore, the data equalization operation helps achieve a better defect detection result.
Originality/value
An end-to-end learning framework is established for defect detection. Although the adopted fully convolutional network has been extensively used for semantic segmentation in images, to the best knowledge of the authors it has not been used for industrial defect detection. To remedy the difficulties of blade defect detection analyzed above, the authors develop a new network architecture that integrates residual learning to perform efficient defect detection. A dual-arm data collection platform is constructed and extensive experimental validation is conducted.
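The residual-learning idea the architecture integrates can be illustrated with a toy, framework-free block; `residual_block` and the example transformation below are hypothetical, not the authors' network.

```python
# Residual-learning sketch: a block learns a correction f(x) and adds it back
# to its input via a skip connection, so the layer only models the residue.

def residual_block(x, f):
    """Apply the learned transformation f and add the skip connection."""
    fx = f(x)
    return [a + b for a, b in zip(x, fx)]

# If the learned transformation is zero, the block reduces to the identity,
# which is why residual connections make deep networks easier to train.
features = [0.5, -1.0, 2.0]
print(residual_block(features, lambda v: [0.0] * len(v)))  # unchanged input
```

In a real network `f` would be a stack of convolution layers; the sketch only shows the skip-connection arithmetic.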
Per Engelseth, Judith Molka-Danielsen and Brian E. White
Abstract
Purpose
The purpose of this paper is to question the applicability of recent industry-derived terms such as “Big Data” (BD) and the “Internet of things” (IoT) in a supply chain managerial context. Is this labeling useful in managing the operations found in supply chains?
Design/methodology/approach
BD and IoT are critically discussed in the context of a complete supply chain organization. A case study of banana supply from Costa Rica to Norway is provided to empirically ground this research. Thompson’s contingency theory, Alderson’s functionalistic end-to-end “marketing channels” model, Penrose’s view of supply purpose associated with service provision, and particularities of banana supply reveal how end-to-end supply chains are complex systems, even though the product distributed is fairly simple.
Findings
Results indicate that the usefulness of BD in supply chain management discourse is limited. Instead, connectivity is facilitated by what is now commonly labeled IoT: people, devices and documents that are useful when taking an end-to-end supply chain perspective. Connectivity is critical to efficient contemporary supply chain management.
Originality/value
BD and IoT have emerged as part of contemporary supply chain management discourse. This study directs attention to the importance of scrutinizing emergent and actual discourse in managing supply chains: it is not irrelevant which words are applied, e.g. in research on information-enabled supply process development. Often the old words of professional terminology may be sufficient, or even better, to help manage supply.
Darlington A. Akogo and Xavier-Lewis Palmer
Abstract
Purpose
Computer vision for automated analysis of cells and tissues usually includes extracting features from images before analyzing them via various machine learning and machine vision algorithms. The purpose of this work is to explore and demonstrate the ability of a Convolutional Neural Network (CNN) to classify cells pictured via brightfield microscopy without any feature extraction, using a minimum of images, improving workflows that involve cancer cell identification.
Design/methodology/approach
The methodology involved a quantitative measure of the performance of a Convolutional Neural Network in distinguishing between two cancer cell lines. The authors trained, validated and tested their 6-layer CNN on 1,241 images of the MDA-MB-468 and MCF7 breast cancer cell lines in an end-to-end fashion, allowing the system to distinguish between the two cancer cell types.
Findings
They obtained a 99% accuracy, providing a foundation for more comprehensive systems.
Originality/value
Value can be found in that systems based on this design can be used to assist cell identification in a variety of contexts, while a practical implication is that such systems can be deployed to assist biomedical workflows quickly and at low cost. In conclusion, this system demonstrates the potential of end-to-end learning systems for faster and more accurate automated cell analysis.