I. Introduction
Designing modern hardware is becoming increasingly challenging due to the complexity of chips for applications such as IoT, AI, and quantum computing [1]. These intricate hardware designs are difficult to test and verify, raising the risk of hidden bugs and vulnerabilities. One major reason is that existing verification and testing approaches often require the manual creation of assertions, data models, and test vectors [2]. Furthermore, some vulnerabilities may not affect all functionalities of a design, making sole reliance on functional verification insufficient for ensuring system robustness and reliability [3], [4]. Because flaws in hardware design can be a primary source of security vulnerabilities, it is essential to identify and fix hardware bugs automatically, with minimal human intervention, during the design phase.

Fine-tuning Large Language Models (LLMs) for domain-specific tasks has seen success in fields such as medicine [5] and software design [6]. LLMs have been at the forefront of advances in numerous programming-related tasks, demonstrating their potential to automate tasks such as code completion, malware detection, and code refactoring [7].