
Efficient Stimuli Generation Using Reinforcement Learning in Design Verification

EasyChair Preprint 13423

4 pages · Date: May 23, 2024

Abstract

The increasing design complexity of Systems-on-Chip (SoCs) has led to significant verification challenges, particularly in meeting coverage targets in a timely manner. Coverage closure currently depends heavily on constrained-random and coverage-driven verification methodologies, in which randomized stimuli are constrained to exercise specific scenarios and reach coverage goals. This process is exhaustive and consumes a large share of project time. In this paper, a novel methodology is proposed that uses Reinforcement Learning (RL) to generate efficient stimuli and reach the maximum code coverage of the Design Under Verification (DUV). Additionally, an automated framework based on metamodeling generates a SystemVerilog testbench and an RL environment for any given design. The proposed approach is applied to various designs, and the results show that the RL agent provides effective stimuli, achieving code coverage faster than baseline random simulations. Furthermore, various RL agents and reward schemes are analyzed in our work.
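The abstract's core idea, an RL agent whose reward reflects newly attained code coverage, can be illustrated with a minimal, hypothetical sketch. The `simulate_duv` function, the epsilon-greedy agent, and the coverage-bin reward below are all illustrative assumptions, not the paper's actual framework or reward scheme:

```python
import random

def simulate_duv(stimulus):
    """Toy stand-in for a DUV simulation: maps a stimulus to the set of
    coverage bins it hits (assumed mapping, for illustration only)."""
    return {stimulus % 8, (stimulus * 3) % 8}

def rl_coverage_loop(num_bins=8, episodes=50, epsilon=0.3, seed=0):
    """Epsilon-greedy agent that selects stimuli; reward is the number of
    coverage bins not hit by any previous stimulus."""
    rng = random.Random(seed)
    q = {}                       # running action-value estimate per stimulus
    actions = list(range(16))    # candidate stimulus values
    covered = set()
    for _ in range(episodes):
        if rng.random() < epsilon or not q:
            action = rng.choice(actions)   # explore a random stimulus
        else:
            action = max(q, key=q.get)     # exploit best-known stimulus
        hit = simulate_duv(action)
        reward = len(hit - covered)        # reward = newly covered bins
        covered |= hit
        # incremental value update with a fixed learning rate of 0.5
        q[action] = q.get(action, 0.0) + 0.5 * (reward - q.get(action, 0.0))
        if len(covered) == num_bins:
            break                          # coverage closure reached
    return len(covered) / num_bins

print(rl_coverage_loop())
```

Because the reward decays to zero for stimuli that only revisit already-covered bins, the agent is steered toward inputs that open new coverage, which is the intuition behind converging faster than purely random stimulus selection.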

Keyphrases: Design Verification, Metamodeling, Reinforcement Learning, Code Coverage

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:13423,
  author    = {Deepak Narayan Gadde and Thomas Nalapat and Aman Kumar and Djones Lettnin and Wolfgang Kunz and Sebastian Simon},
  title     = {Efficient Stimuli Generation Using Reinforcement Learning in Design Verification},
  howpublished = {EasyChair Preprint 13423},
  year      = {EasyChair, 2024}}