Search results

1 – 10 of over 4000
Article
Publication date: 10 March 2021

Afshan Amin Khan, Roohie Naaz Mir and Najeeb-Ud Din

This work focused on a basic building block of an allocation unit that carries out the critical job of deciding between the conflicting requests, i.e. an arbiter unit. The purpose…

Abstract

Purpose

This work focuses on a basic building block of an allocation unit that carries out the critical job of deciding between conflicting requests, i.e. the arbiter unit. The purpose of this work is to implement an improved hybrid arbiter while harnessing the basic advantages of a matrix arbiter.

Design/methodology/approach

The basic approach of the design methodology involves extracting traffic information from the buffer signals of each port. As traffic arrives in the buffer of each port, information from these buffers differentiates ports receiving low traffic rates from ports receiving high traffic rates. A logic circuit is devised that enables the arbiter to dynamically assign priorities to different ports based on this buffer information. For implementation and verification of the proposed design, a two-stage approach was used. Stage I compares the proposed arbiter with other arbiters in the literature using the Vivado integrated design environment. Stage II demonstrates the implementation of the proposed design in the Cadence design environment for application-specific integrated circuit (ASIC)-level implementation. This strategy lets the study focus in particular on the feasibility of the design for very large-scale integration (VLSI) implementation.
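
The abstract does not give the arbiter's logic, but the core idea of letting buffer occupancy steer priorities can be sketched behaviourally. The following Python sketch is an illustration under assumed port counts and tie-break rules, not the authors' RTL: among the requesting ports, the one with the deepest buffer backlog wins, and ties fall back to a least-recently-served order of the kind a matrix arbiter maintains.

    from typing import List, Optional

    class TrafficAwareArbiter:
        def __init__(self, num_ports: int = 4):
            self.num_ports = num_ports
            # Least-recently-served order: earlier in the list = higher tie-break priority.
            self.lrs_order: List[int] = list(range(num_ports))

        def grant(self, requests: List[bool], occupancy: List[int]) -> Optional[int]:
            """Return the index of the granted port, or None if no port is requesting."""
            candidates = [p for p in range(self.num_ports) if requests[p]]
            if not candidates:
                return None
            # A request whose buffer reports zero occupancy cannot win on traffic
            # alone; this stands in for the fault-tolerance check described above.
            winner = min(candidates,
                         key=lambda p: (-max(occupancy[p], 0), self.lrs_order.index(p)))
            # The granted port drops to the back of the least-recently-served order,
            # preserving the fairness property of a matrix arbiter.
            self.lrs_order.remove(winner)
            self.lrs_order.append(winner)
            return winner

    arb = TrafficAwareArbiter()
    print(arb.grant([True, False, True, True], [1, 0, 5, 2]))  # port 2 wins: deepest backlog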

Findings

According to the simulation results, the proposed hybrid arbiter maintains the advantage of a basic matrix arbiter and also possesses the additional feature of fault-tolerant traffic awareness. These features for a hybrid arbiter are achieved with a 19% increase in throughput, a 1.5% decrease in delay and a 19% area increase in comparison to a conventional matrix arbiter.

Originality/value

This paper proposes a traffic-aware mechanism that increases the throughput of an arbiter unit with some area trade-off. The key feature of this hybrid arbiter is that it assigns priorities to the requesting ports based upon the real-time traffic requirements of each port. As a result, the arbiter can make arbitration decisions dynamically. Because buffer information is decisive in winning priority, a fault-tolerant policy ensures that priority is never falsely assigned to a requesting port. This avoids wasting arbitration cycles and also increases throughput.

Article
Publication date: 26 May 2021

Yuhan Luo and Mingwei Lin

The purpose of this paper is to make an overview of 474 publications and 512 patents of FTL from 1987 to 2020 in order to provide a conclusive and comprehensive analysis for…

Abstract

Purpose

The purpose of this paper is to provide an overview of 474 publications and 512 patents on the flash translation layer (FTL) from 1987 to 2020, in order to offer a conclusive and comprehensive analysis for researchers in this field, as well as preliminary knowledge of FTL for interested researchers.

Design/methodology/approach

Firstly, the FTL algorithms are classified and their functions are introduced in detail. Secondly, the structure of the publications is analyzed in terms of fundamental information and the publication output of the most productive countries/regions, institutions and authors. After that, co-citation networks of institutions, authors and papers, illustrated with VOSviewer, are presented to show the relationships among them, and the most influential ones are analyzed further. Then, the characteristics of the patents are analyzed based on their basic information, their classification and the most productive inventors. To identify research hotspots and trends in this field, a time-line review and citation burst detection of keywords are visualized with CiteSpace. Finally, based on the above analysis, further important conclusions and the development trend of this field are drawn.
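
The co-occurrence and co-citation networks themselves are drawn with VOSviewer and CiteSpace, but the underlying counting step is straightforward to reproduce. A minimal Python sketch follows; the sample records and keywords are invented for illustration and are not the paper's data:

    from itertools import combinations
    from collections import Counter

    # Each record's keyword list; every pair appearing together adds one to that edge.
    records = [
        ["flash translation layer", "SSD", "wear leveling"],
        ["flash translation layer", "garbage collection", "SSD"],
    ]

    edges = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1

    print(edges.most_common(3))  # ('SSD', 'flash translation layer') co-occurs twice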

Findings

Research on FTL algorithms will remain the top priority in the future, and how to improve the performance of solid-state drives (SSDs) in the era of big data is one of the research hotspots.

Research limitations/implications

This paper provides a comprehensive bibliometric analysis of FTL, enabling researchers to quickly grasp the hotspots in this area.

Originality/value

This article outlines the structural characteristics of the publications in this field and summarizes the research hotspots and trends of recent years, aiming to inspire new ideas for researchers.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 9 January 2024

Kaizheng Zhang, Jian Di, Jiulong Wang, Xinghu Wang and Haibo Ji

Many existing trajectory optimization algorithms use parameters like maximum velocity or acceleration to formulate constraints. Due to the ignoring of the quadrotor actual…

Abstract

Purpose

Many existing trajectory optimization algorithms use parameters such as maximum velocity or acceleration to formulate constraints. Because the quadrotor's actual tracking capability is ignored, the generated trajectories may not be suitable for tracking control. The purpose of this paper is to design an online adjustment algorithm that improves overall quadrotor trajectory tracking performance.

Design/methodology/approach

The authors propose a reference trajectory resampling layer (RTRL) to dynamically adjust the reference signals according to the current tracking status and future tracking risks. First, the authors design a risk-aware tracking monitor that uses the Frenet tracking errors and the curvature and torsion of the reference trajectory to evaluate tracking risks. Then, the authors propose an online adjustment algorithm based on the time-scaling method.
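
The exact monitor and scaling rule are not given in the abstract; the Python sketch below only illustrates the ingredients it names, under assumed weights and thresholds: curvature and torsion estimated from a sampled reference trajectory, combined with a tracking error into a risk score that, when too high, stretches the reference timing (time scaling).

    import numpy as np

    def curvature_torsion(r: np.ndarray, dt: float):
        """r: (N, 3) sampled reference positions. Returns per-sample curvature and torsion."""
        d1 = np.gradient(r, dt, axis=0)
        d2 = np.gradient(d1, dt, axis=0)
        d3 = np.gradient(d2, dt, axis=0)
        cross = np.cross(d1, d2)
        speed = np.linalg.norm(d1, axis=1)
        kappa = np.linalg.norm(cross, axis=1) / np.maximum(speed**3, 1e-9)
        tau = np.einsum("ij,ij->i", cross, d3) / np.maximum(np.linalg.norm(cross, axis=1)**2, 1e-9)
        return kappa, tau

    def time_scale(tracking_error: float, kappa: float, tau: float,
                   w=(1.0, 0.5, 0.1), risk_threshold=1.0) -> float:
        """Return a factor >= 1 by which to stretch the reference time step (assumed rule)."""
        risk = w[0] * tracking_error + w[1] * kappa + w[2] * abs(tau)
        return 1.0 if risk <= risk_threshold else 1.0 + (risk - risk_threshold)

    # Example: a circular arc of radius 2 m, so curvature should be ~1/2.
    dt = 0.01
    t = np.arange(0, 6.28, dt)
    ref = np.stack([2 * np.cos(t), 2 * np.sin(t), np.ones_like(t)], axis=1)
    kappa, tau = curvature_torsion(ref, dt)
    i = len(t) // 2
    print(round(float(kappa[i]), 3), round(float(tau[i]), 3))               # ~0.5 and ~0.0
    print(time_scale(tracking_error=0.8, kappa=kappa[i], tau=tau[i]))       # > 1 -> slow the reference down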

Findings

The proposed RTRL is shown to be effective in improving quadrotor trajectory tracking accuracy in both simulation and experimental results.

Originality/value

Infeasible reference trajectories may cause serious accidents for autonomous quadrotors. The results of this paper can improve the safety of autonomous quadrotors in applications.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Book part
Publication date: 14 November 2011

Rolando Quintana and Mark T. Leung

Increasing competition within the global supply chain network has been pressuring managers to improve efficiencies of production systems while, at the same time, reduce…

Abstract

Increasing competition within the global supply chain network has been pressuring managers to improve the efficiency of production systems while, at the same time, reducing manufacturing operation expenses. One well-known approach is to gain better control of the manufacturing system through more accurate forecasting and efficient control. In other words, a production control paradigm with more reliable forward visibility should help maintain a cost-effective yet lean manufacturing environment. Hence, this study proposes a predictive decision support system for controlling and managing complex production environments and demonstrates a Visual Interactive Simulation (VIS) framework for forecasting system performance given a designated set of production control parameters. The VIS framework is applied to a real-world manufacturing system in which the primary objective is to minimize total production while maintaining consistently high throughput and controlling the work-in-process level. Through this case study, we demonstrate the use of VIS and validate its effectiveness in the optimization and prediction of the examined production system. Results show that the predictive VIS framework leads to better and more reliable decision-making on the selection of control parameters for the manufacturing system under study. Statistical analyses are incorporated to further strengthen the VIS decision-making process.

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-0-85724-959-3

Article
Publication date: 21 May 2020

Osman Hürol Türkakın, Ekrem Manisalı and David Arditi

In smaller projects with limited resources, schedule updates are often not performed. In these situations, traditional delay analysis methods cannot be used as they all require…

Abstract

Purpose

In smaller projects with limited resources, schedule updates are often not performed. In these situations, traditional delay analysis methods cannot be used as they all require updated schedules. The objective of this study is to develop a model that performs delay analysis by using only an as-planned schedule and the expense records kept on site.

Design/methodology/approach

This study starts out by developing an approach that estimates activity duration ranges in a network schedule by using as-planned and as-built s-curves. Monte Carlo simulation is performed to generate candidate as-built schedules using these activity duration ranges. If necessary, the duration ranges are refined by a follow-up procedure that systematically relaxes the ranges and develops new as-built schedules. The candidate schedule that has the closest s-curve to the actual s-curve is considered to be the most realistic as-built schedule. Finally, the as-planned vs. as-built delay analysis method is performed to determine which activity(ies) caused project delay. This process is automated using Matlab. A test case is used to demonstrate that the proposed automated method can work well.
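
The paper automates this workflow in Matlab; the Python sketch below, built on an invented three-activity chain with illustrative duration ranges and daily costs, shows only the mechanics of the search: sample durations, build each candidate's cumulative-cost s-curve and keep the candidate whose s-curve is closest to the actual one.

    import numpy as np

    rng = np.random.default_rng(0)

    # Activity -> (min duration, max duration, cost per day); A precedes B precedes C.
    activities = {"A": (3, 5, 100.0), "B": (4, 8, 200.0), "C": (2, 4, 150.0)}

    def sample_schedule():
        """Draw one candidate as-built schedule and return (durations, cumulative-cost s-curve)."""
        durations = {k: int(rng.integers(lo, hi + 1)) for k, (lo, hi, _) in activities.items()}
        daily_cost = []
        for k in ("A", "B", "C"):           # serial chain, so the forward pass is sequential
            daily_cost += [activities[k][2]] * durations[k]
        return durations, np.cumsum(daily_cost)

    def s_curve_distance(candidate, actual):
        """Sum of absolute differences between two cumulative-cost curves."""
        n = max(len(candidate), len(actual))
        pad = lambda c: np.pad(c, (0, n - len(c)), constant_values=c[-1])
        return float(np.abs(pad(candidate) - pad(actual)).sum())

    actual_s_curve = sample_schedule()[1]   # stand-in for the s-curve built from the expense records
    best_durations, best_curve = min((sample_schedule() for _ in range(1000)),
                                     key=lambda cand: s_curve_distance(cand[1], actual_s_curve))
    print(best_durations, s_curve_distance(best_curve, actual_s_curve))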

Findings

The automated process developed in this study can develop activity duration ranges, perform Monte Carlo simulation, generate a large number of candidate as-built schedules, build s-curves for each of the candidate schedules and identify the most realistic candidate, i.e. the one whose s-curve is closest to the actual as-built s-curve. The test case confirmed that the proposed automated system works well, as it produced an as-built schedule whose s-curve is identical to the actual as-built s-curve. Developing an as-built schedule using this method is a reasonable way to make a case in or out of a court of law.

Research limitations/implications

The activity duration ranges that practitioners specify for the Monte Carlo simulation can be characterized as subjective and perhaps arbitrary. To minimize the effects of this limitation, this study proposes a method that determines duration ranges by comparing as-built and as-planned cash flows, and then by systematically modifying the search space. Another limitation is the assumption that the precedence logic in the as-planned network remains the same throughout construction. Since updated schedules are not available in the scenario considered in this study, and since in small projects the logic relationships are fairly stable over the short project duration, the assumption of stable logic throughout construction may be reasonable, but this issue needs to be explored further in future research.

Practical implications

Delays are common in construction projects regardless of the size of the project. The critical path method (CPM) schedules of many smaller projects, especially in developing countries, are not updated during construction. In case updated schedules are not available, the method presented in this paper represents an automated, practical and easy-to-use tool that allows parties to a contract to perform delay analysis with only an as-planned schedule and the expense logs kept on site.

Originality/value

Since an as-built schedule cannot be built without updated schedules, and since the absence of an as-built schedule precludes the use of any delay analysis method that is acceptable in courts of law, using the method presented in this paper may very well be the only solution to the problem.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 10
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 30 April 2020

Mehdi Darbandi, Amir Reza Ramtin and Omid Khold Sharafi

A set of routers connected over communication channels can form a network-on-chip (NoC). High performance, scalability, modularity and the ability to parallel the structure…

Abstract

Purpose

A set of routers connected over communication channels forms a network-on-chip (NoC). High performance, scalability, modularity and the ability to parallelize communication are some of its advantages. As the number of cores in an NoC grows, their arrangement becomes increasingly important. Mapping assigns different functional units to different nodes of the NoC, and the way this is done has a significant effect on performance and network power utilization. The NoC mapping problem is NP-hard; therefore, meta-heuristic algorithms are well suited to finding optimal or near-optimal solutions. The purpose of this paper is to design a novel procedure for mapping process cores that reduces communication delay and cost. A multi-objective particle swarm optimization algorithm based on crowding distance (MOPSO-CD) is used for this purpose.

Design/methodology/approach

In the proposed approach, which uses a two-dimensional mesh topology as the base construction, the mapping operation is divided into two stages: allocating tasks to suitable intellectual property (IP) cores; and mapping these cores to specific tiles on the NoC platform.
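
The search itself uses MOPSO-CD, which is not reproduced here; the Python sketch below only illustrates the communication-cost objective commonly minimised when mapping cores onto a 2D mesh, with an invented traffic table and a 2x2 mesh: traffic volume times the Manhattan hop distance between the assigned tiles.

    from itertools import permutations

    traffic = {(0, 1): 10, (1, 2): 5, (0, 3): 2}   # core -> core traffic volumes (assumed)
    tiles = [(0, 0), (0, 1), (1, 0), (1, 1)]       # coordinates of a 2x2 mesh

    def comm_cost(mapping):
        """mapping[i] = index of the tile assigned to core i."""
        cost = 0
        for (a, b), vol in traffic.items():
            (xa, ya), (xb, yb) = tiles[mapping[a]], tiles[mapping[b]]
            cost += vol * (abs(xa - xb) + abs(ya - yb))   # Manhattan hop distance
        return cost

    # Exhaustive search is feasible only at this toy size; the paper uses MOPSO-CD instead.
    best = min(permutations(range(4)), key=comm_cost)
    print(best, comm_cost(best))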

Findings

The proposed method substantially mitigates the known problems and limitations of meta-heuristic algorithms. It performs better than the particle swarm optimization (PSO) and genetic algorithms in convergence to the Pareto front, in producing a well-distributed set of solutions and in computational time. The simulation results also show that the delay of the proposed method is 1.1 per cent better than the genetic algorithm and 0.5 per cent better than the PSO algorithm. For the communication cost parameter, the proposed method performs 2.7 per cent better than the genetic algorithm and 0.16 per cent better than the PSO algorithm.

Originality/value

The MOPSO-CD algorithm has not previously been used to solve the task mapping problem in NoCs.

Details

International Journal of Pervasive Computing and Communications, vol. 16 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 3 August 2015

Michal Skorepa and Jakub Seidler

The purpose of this paper is to assist the numerous regulators around the globe who are currently considering ways to impose domestic systemic importance-based capital…

Abstract

Purpose

The purpose of this paper is to assist the numerous regulators around the globe who are currently considering ways to impose domestic systemic importance-based capital requirements on banks.

Design/methodology/approach

The article discusses in some detail a number of issues from the viewpoint of regulatory practice, mentioning relevant literature where available. Comments partly reflect the experience that the Czech National Bank gathered over the past two years while preparing its own regime of domestic systemic importance-based capital requirements on banks.

Findings

The authors stress, among other points, one weakness of the (otherwise well-designed) method suggested by the Basel Committee on Banking Supervision (BCBS) for assessing banks' systemic importance: the method is "relative" in that it does not reflect the absolute importance of the banking sector for the economy. The paper also explains that in some cases, use of individual-level rather than consolidated-level data may be preferable, in contrast to what the BCBS guidance suggests. Further, the longer-term implications of the buffers are pointed out.
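
The "relative" nature of such an assessment can be seen in a toy calculation; the indicator names and figures below are invented and this is not the BCBS methodology in full. Each bank's score is its share of the sector total averaged across indicators, so the scores are unchanged whether the banking sector is large or small relative to the economy.

    # Illustrative relative scoring: shares of sector totals, averaged over indicators.
    banks = {
        "Bank A": {"size": 300.0, "interconnectedness": 120.0, "substitutability": 80.0},
        "Bank B": {"size": 100.0, "interconnectedness": 40.0,  "substitutability": 60.0},
    }

    indicators = ["size", "interconnectedness", "substitutability"]
    sector_totals = {i: sum(b[i] for b in banks.values()) for i in indicators}

    scores = {
        name: sum(b[i] / sector_totals[i] for i in indicators) / len(indicators)
        for name, b in banks.items()
    }
    print(scores)  # shares sum to 1 across banks, regardless of the sector's absolute size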

Originality/value

As far as the authors are aware, this article is the first to comprehensively discuss the main issues surrounding both key steps (systemic importance assessment and determination of the buffer level) in the process of introducing buffers based on domestic systemic importance. A number of questions related to these two steps are raised that regulators may appreciate being reminded of, even if some of them cannot be given a generally applicable answer.

Details

Journal of Financial Economic Policy, vol. 7 no. 3
Type: Research Article
ISSN: 1757-6385

Keywords

Article
Publication date: 1 October 1996

Simon F. Hurley

Buffers of work throughout a manufacturing facility enhance throughput. They protect a workstation against variations in processing times and against machine breakdowns of…


Abstract

Buffers of work throughout a manufacturing facility enhance throughput. They protect a workstation against variations in processing times and against machine breakdowns at upstream workstations. However, buffer management is still regarded as an open problem: first, there is no algebraic way of representing the relationship between buffer size and throughput; and second, the combinatorial nature of the buffer design problem makes it difficult to develop an exact solution. These problems still exist today, as evidenced by the number of research papers that present sophisticated mathematics to solve this complex problem. Refutes all the above points. The buffer management method detailed does not use sophisticated mathematics impenetrable to the average production manager. Presents a heuristically-based buffer management method that is effective at protecting throughput. The method has advantageous effects on the size of buffers and the length of production lead times, while still protecting the throughput rate.
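
Since there is no algebraic expression linking buffer size to throughput, the relationship is usually estimated by simulation. The Python sketch below, with purely illustrative completion probabilities standing in for processing variation and breakdowns, shows the typical shape: throughput rises with buffer size, but with diminishing returns.

    import random

    def throughput(buffer_size: int, p1=0.8, p2=0.8, steps=200_000, seed=1) -> float:
        """Two-station line separated by one finite buffer; each station finishes a part
        in a time step with probability p1 or p2. Returns parts finished per step."""
        rng = random.Random(seed)
        buffer, finished = 0, 0
        for _ in range(steps):
            # Station 2 pulls from the buffer first, then station 1 refills it.
            if buffer > 0 and rng.random() < p2:
                buffer -= 1
                finished += 1
            if buffer < buffer_size and rng.random() < p1:
                buffer += 1
        return finished / steps

    for size in (1, 2, 4, 8):
        print(size, round(throughput(size), 3))  # throughput rises with diminishing returns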

Details

International Journal of Operations & Production Management, vol. 16 no. 10
Type: Research Article
ISSN: 0144-3577

Keywords

Article
Publication date: 15 January 2021

Frank Goethals and Jennifer L. Ziegelmayer

The advent of extreme automation from new technologies such as artificial intelligence portends a massive increase in unemployment. The psychological impact of this threat on the…

Abstract

Purpose

The advent of extreme automation from new technologies such as artificial intelligence portends a massive increase in unemployment. The psychological impact of this threat on the workforce is critically important. This paper aims to examine the functioning of individuals' anxiety buffers in response to this threat.

Design/methodology/approach

A two-stage mixed-methods design is used. In stage 1, qualitative data are gathered through semi-structured interviews. In stage 2, quantitative data are collected through two experiments to assess the psychological impact of exposure to the threat.

Findings

Exposure to the threat of extreme automation reduces self-esteem, faith in the worldview and attachment security. When self-esteem and attachment security are under attack, they are ineffective as anxiety buffers, and anxiety levels increase. Additionally, there is a distal effect such that during a period of distraction, the threatened anxiety buffers are reinforced and return to their normal levels.

Research limitations/implications

This study is limited to a homogenous culture in which work is highly salient. Future research should include other cultures, other methods of exposure and further examine the distal effects.

Originality/value

The study examines the previously underexplored issue of individuals' psychological response to the impending changes in the workforce because of technological advancements.

Details

Information Technology & People, vol. 35 no. 1
Type: Research Article
ISSN: 0959-3845

Keywords

Open Access
Article
Publication date: 30 October 2023

Lisa Hedvall, Helena Forslund and Stig-Arne Mattsson

The purposes of this study were (1) to explore empirical challenges in dimensioning safety buffers and their implications and (2) to organise those challenges into a framework.

Abstract

Purpose

The purposes of this study were (1) to explore empirical challenges in dimensioning safety buffers and their implications and (2) to organise those challenges into a framework.

Design/methodology/approach

In a multiple-case study following an exploratory, qualitative and empirical approach, 20 semi-structured interviews were conducted in six cases. Representatives of all cases subsequently participated in an interactive workshop, after which a questionnaire was used to assess the impact and presence of each challenge. A cross-case analysis was performed to situate empirical findings within the literature.

Findings

Ten challenges were identified in four areas of dimensioning safety buffers: decision management, responsibilities, methods for dimensioning safety buffers and input data. All challenges had both direct and indirect negative implications for dimensioning safety buffers and were synthesised into a framework.

Research limitations/implications

This study complements the literature on dimensioning safety buffers with qualitative insights into challenges in dimensioning safety buffers and implications in practice.

Practical implications

Practitioners can use the framework to understand and overcome challenges in dimensioning safety buffers and their negative implications.

Originality/value

This study responds to the scarcity of qualitative and empirical studies on dimensioning safety buffers and the absence of any overview of the challenges therein.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 9
Type: Research Article
ISSN: 1741-038X

Keywords
