Abstract
Purpose
With growing system-on-a-chip (SOC) design complexity, SOC verification has become a major bottleneck. In this context, an efficient and reliable verification environment is required for an SOC design before it is committed to production. The purpose of this paper is to judge whether a hardware/software (HW/SW) co-verification environment can handle SOC verification and provide the necessary performance in terms of co-verification speed and throughput, power and resource consumption, timing analysis, etc.
Design/methodology/approach
A finite-impulse-response (FIR) filter is used as the device-under-test to compare pure SW simulation (the Modelsim simulator in this case) with the HW/SW co-verification approach, in order to determine whether the HW/SW co-verification environment is up to the task. In addition, the performance of the HW/SW co-verification environment is estimated against specifications such as co-verification speed and throughput, power and resource consumption, and timing analysis.
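As a rough illustration of the software reference such a flow compares against, here is a minimal golden-model FIR sketch; the tap count and coefficients are hypothetical, since the paper's actual filter parameters are not given here:

```python
# Minimal golden-model sketch of a FIR device-under-test.
# Coefficients below are illustrative only, not the paper's design.

def fir_filter(samples, coeffs):
    """Direct-form FIR: y[n] = sum_k coeffs[k] * x[n-k]."""
    taps = [0] * len(coeffs)          # delay line, newest sample first
    out = []
    for x in samples:
        taps = [x] + taps[:-1]        # shift the new sample in
        out.append(sum(c * t for c, t in zip(coeffs, taps)))
    return out

# In co-verification, a software reference like this is checked
# sample-by-sample against the hardware DUT's outputs.
coeffs = [1, 2, 2, 1]                 # hypothetical symmetric taps
print(fir_filter([1, 0, 0, 0, 0], coeffs))  # impulse response -> the taps
```

Feeding an impulse through the model returns the coefficient sequence itself, which makes it a convenient first sanity check before comparing longer stimulus streams against the hardware.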
Findings
From the experimental results, it can be concluded that the more complicated the SOC is, the greater the potential speedup of the co-verification approach over SW simulation. However, the communication between SW and HW in an HW/SW co-verification system is a major bottleneck, which may offset the acceleration achieved by moving large computations from the SW to the HW side.
Originality/value
Performance estimation for the HW/SW co‐verification environment has been conducted in terms of co‐verification speed and throughput, power and resource consumption, timing analysis, etc.
Abstract
Purpose
Increasingly complex and sophisticated VLSI designs, coupled with shrinking design cycles, require shorter verification time and efficient debug methods. Logic simulation provides SOC verification with full controllability and observability, but it suffers from very slow simulation speed on complex designs. Hardware emulation, such as on an FPGA, offers higher simulation speed but is very hard to debug because of its poor visibility. The SOC HW/SW co-verification technique strikes a balance, but the design under test (DUT) still resides in the FPGA and remains hard to debug. The purpose of this paper is to study a run-time RTL debugging methodology for an FPGA-based co-verification system.
Design/methodology/approach
The debugging tools are embedded in the HDL simulator using the Verilog VPI callback mechanism, so testbench signals and internal nodes of the DUT can be observed in a single waveform and updated as the simulation runs, making debugging more efficient. The proposed debugging method connects internal nodes directly to a PCI-extended bus, instead of inserting extra scan-chain logic, so the area overhead is reduced.
Findings
This method provides internal-node probing on an event-driven co-verification platform and achieves full observability of the DUT. Experiments show that, compared with a similar method, the area overhead for the debug logic is reduced by 30-50 per cent and compile time is shortened by 40-70 per cent.
Originality/value
The proposed debugging technique achieves 100 per cent observability and can be applied to both RTL and gate-level verification. The debugging tool is embedded in the HDL simulator using the Verilog VPI callback mechanism, so DUT signals are displayed together with testbench signals in the same waveform viewer. The new value of a DUT signal is read from the FPGA whenever it changes, which allows run-time debugging.
Abstract
Purpose
Traditionally, each time a new finite impulse response (FIR) filter design is required, a new algorithm has to be developed for that filter, and the corresponding hardware architecture must be designed specifically to meet the filter's specifications. The purpose of this paper is to propose an arithmetic logic unit (ALU)-based universal FIR filter suitable for realization in field-programmable gate arrays (FPGAs), where various FIR filters can be implemented with identical hardware architecture simply by programming instructions in the ROM.
Design/methodology/approach
Rather than the multiplier-accumulator architecture of conventional FIR filters, the proposed ALU architecture implements the FIR functions using accumulators and shift registers controlled by instructions stored in ROM. Furthermore, a time-division multiplexing access (TDMA) technique is employed to reduce the chip size. In addition, the proposed FIR architecture is verified in an SOC hardware/software co-emulation system.
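The idea of replacing multipliers with ROM-driven shift-and-accumulate steps can be sketched in software. The instruction format below is invented for illustration (the paper does not disclose its ROM encoding): each entry adds one shifted tap to the accumulator, so a coefficient is spent as one ROM entry per power-of-two term.

```python
# Hypothetical instruction ROM for a multiplierless FIR: each entry
# (tap, shift) means acc += x[n - tap] << shift. A coefficient such as
# 5 = (1 << 2) + (1 << 0) therefore costs two ROM entries instead of
# a hardware multiplier.
ROM = [(0, 2), (0, 0),   # coeff[0] = 5
       (1, 1),           # coeff[1] = 2
       (2, 2), (2, 0)]   # coeff[2] = 5

def alu_fir_output(delay_line):
    """Compute one FIR output by executing the ROM program."""
    acc = 0
    for tap, shift in ROM:
        acc += delay_line[tap] << shift   # shift-and-accumulate, no multiply
    return acc

print(alu_fir_output([1, 1, 1]))   # 5 + 2 + 5
```

Retargeting the same "hardware" to a different filter then amounts to rewriting the ROM contents, which is the universality the abstract claims.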
Findings
An ALU-based universal FIR filter suitable for realization in FPGAs is designed and verified in an SOC hardware/software co-emulation system, with a 64-tap FIR filter as a design example.
Originality/value
A software-based design method and a TDMA scheme for the ALU-based FIR filter are introduced, making the FIR filter architecture universal and programmable while consuming fewer FPGA resources.
Abstract
Purpose
The purpose of this study is to examine supplier–customer capabilities in solution co-creation and how they are matched from a relational process perspective.
Design/methodology/approach
Using a qualitative approach, the authors identified 20 sets of supplier–customer capability matches by conducting in-depth interviews with 34 matched informants and retrieving suppliers’ archival data (project documents and success stories).
Findings
The authors identified 20 capability matching sets (21 supplier and 23 customer capabilities) and developed a process-based model of bilateral capabilities that match at the organizational level in solution co-creation. The authors reveal the forms these matches take (complementarity and compatibility) and offer suggestions for future research.
Research limitations/implications
This paper is qualitative; quantitative studies are required for testing and extending the initial conclusions.
Practical implications
This study guides the supplier and customer to cultivate different capabilities at different stages of solution co-creation and alerts them to the importance of capability complementarity and compatibility.
Originality/value
To the best of the authors’ knowledge, this study is the first to introduce the bilateral perspective into dynamic capability research in the context of solution co-creation. The authors discuss the abilities the supplier and customer must possess at different stages and how they match dynamically. The analysis extends the research on solution-specific capabilities and dynamic matching, offering useful implications for solution co-creation in practice.
Geri A. Dino, Kimberly A. Horn and Heather Meit
Abstract
Presents findings from the pilot study of a gender-sensitive adolescent smoking cessation programme called Not On Tobacco (N-O-T). N-O-T is a school-based programme designed to help teenagers stop smoking or reduce cigarette use among those who are unable to quit completely. A total of 29 adolescents from three high schools in West Virginia participated (19 females and 10 males ranging between 14 and 18 years old). Smoking abstinence was measured using self-report and was verified by exhaled carbon monoxide (CO) readings. At three months post-baseline, total abstinence for programme participants was 22 per cent and reduction rates ranged from 30 per cent to 96 per cent. At four months post-baseline, 44 per cent of the boys and 14 per cent of the girls reported being smoke free. Findings from this pilot study suggest that N-O-T warrants further investigation and redesign with emphasis on more highly prescribed, gender-sensitive intervention strategies. Consequently, a completely new programme has been developed and is currently being evaluated.
Abstract
Purpose
With the rapid development of integrated circuits, verification of SOC chips has become a great challenge owing to their integration and complexity. Traditional software-based simulation methodology cannot meet verification needs, so FPGA-based hardware acceleration technologies are required in SOC verification. The classic hardware-acceleration methodology downloads the device under test (DUT) to the FPGA, while part of the RTL code and the test bench still run on the simulator in the workstation. Research has found that the speed bottleneck of this methodology is caused mostly by the ping-pong mode of data transmission between the workstation software and the FPGA emulator, so that channel transmission time takes up too large a proportion of the total time. The purpose of this paper is to present a vector-mode-based hardware/software co-emulation methodology that leverages a pipeline structure to transmit, receive and buffer data. This methodology reduces the communication overhead through a parallel mechanism in which signal data are transmitted over the channel while the user's design is under test in the emulator, thus increasing the speed of hardware acceleration and emulation.
Design/methodology/approach
The proposed hardware-acceleration methodology captures data once from the emulation process of a traditional platform to serve as the test bench, utilizes a direct memory access (DMA) channel to speed up data transfer, and adds a reasonable data-caching mechanism. This reduces the proportion of channel transmission time in the overall emulation time, thereby accelerating emulation.
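Why overlapping transfer with computation pays off can be seen in a back-of-the-envelope timing model. The numbers below are assumed, not taken from the paper: per-transaction round-trip latency dominates the ping-pong mode, while a buffered pipeline pays that latency only once.

```python
# Toy cost model (assumed parameters, arbitrary time units) contrasting
# ping-pong transfers with buffered vector transfers that overlap compute.

def pingpong_time(n_vectors, latency, xfer, compute):
    # Each vector pays round-trip latency + transfer + compute, serially.
    return n_vectors * (latency + xfer + compute)

def pipelined_time(n_vectors, latency, xfer, compute):
    # One up-front latency to fill the pipeline; afterwards transfer and
    # compute overlap, so each vector costs only the slower stage.
    return latency + xfer + compute + (n_vectors - 1) * max(xfer, compute)

n = 1000
print(pingpong_time(n, 50, 1, 1))    # serial: 1000 * 52
print(pipelined_time(n, 50, 1, 1))   # pipelined: 52 + 999 * 1
```

With these illustrative parameters the pipelined model is roughly 50x faster; the paper's reported 140x gain on the LDPC decoder is of the same character, driven by how large the per-transaction latency is relative to the useful work.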
Findings
The proposed methodology and the traditional hardware-acceleration approach were tested on a quasi-cyclic low-density parity-check (LDPC) decoder. Experimental results indicate that the proposed method increases communication throughput 140-fold compared with the traditional approach.
Originality/value
A vector-mode-based hardware/software co-emulation methodology is presented. Higher communication throughput is achieved through a parallel mechanism and a pipeline structure for transmitting, receiving and buffering data.
Abstract
Purpose
Infrared focal plane arrays (IRFPAs) have a wide range of applications, so advanced functions are required to support them. The purpose of this paper is to present a control circuit for a user-reconfigurable 320×256 readout integrated circuit (ROIC) designed for IRFPA applications.
Design/methodology/approach
To implement a reconfigurable ROIC, several advanced functions are realized by the control circuit: global reset, selectable capacitive transimpedance amplifier (CTIA) gain, selectable CTIA bandwidth, random access opening (RAO), dynamic image transposition, selectable outputs, and adjustable power dissipation. These functions are invoked by loading the corresponding control words into a 60-bit control register. Seven types of control words control the seven advanced functions, with 14 bits reserved for the realization of further functions in the future.
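A control register of this kind is naturally modelled as packed bit fields. The field names follow the functions listed above, but the bit widths and positions below are illustrative assumptions only; the paper specifies the 60-bit total and the 14 reserved bits, not the exact layout.

```python
# Hypothetical field layout for the 60-bit control register.
# Widths are assumed for illustration; only the 60-bit total and the
# 14 reserved bits come from the abstract.
FIELDS = [                  # (name, width in bits)
    ("global_reset", 1),
    ("ctia_gain", 2),
    ("ctia_bandwidth", 2),
    ("rao_window", 36),     # random access opening window coordinates
    ("transpose", 1),
    ("output_select", 2),
    ("power_mode", 2),
    ("reserved", 14),
]

def pack(values):
    """Pack named field values into one control word, LSB-first."""
    word, pos = 0, 0
    for name, width in FIELDS:
        v = values.get(name, 0)
        assert v < (1 << width), f"{name} out of range"
        word |= v << pos
        pos += width
    return word

print(pack({"global_reset": 1, "ctia_gain": 3}))  # 1 | (3 << 1)
```

Loading such a word into the register then configures all seven functions in one shot, which is the mechanism the abstract describes.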
Findings
Design and simulation of the control circuit, based on the CSMC 0.5 μm process technology, have been conducted to confirm these functions. With them, a wide scene dynamic range can be achieved and the application of the ROIC becomes more flexible.
Originality/value
This paper describes additional functions, such as selectable CTIA bandwidth and global reset, as well as an improved RAO function.