Abstract:
Machine learning has recently gained traction as a way to overcome the slow accelerator generation and implementation process on an FPGA. It can be used to build performance and resource usage models that enable fast early-stage design space exploration. However, these models suffer from three main limitations. First, training requires large amounts of data (features extracted from design synthesis and implementation tools), which is cost-inefficient because of the time-consuming accelerator design and implementation process. Second, a model trained for a specific environment cannot predict performance or resource usage for a new, unknown environment. In a cloud system, renting a platform for data collection to build an ML model can significantly increase the total cost of ownership (TCO) of a system. Third, ML-based models trained using a limited number of samples are prone to overfitting. To overcome these limitations, we propose LEAPER, a transfer learning-based approach for prediction of performance and resource usage in FPGA-based systems. The key idea of LEAPER is to transfer an ML-based performance and resource usage model trained for a low-end edge environment to a new, high-end cloud environment to provide fast and accurate predictions for accelerator implementation. Experimental results show that LEAPER (1) provides, on average across six workloads and five FPGAs, 85% accuracy when we use our transferred model for prediction in a cloud environment with 5-shot learning, and (2) reduces design space exploration time for accelerator implementation on an FPGA by 10×, from days to only a few hours.
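The following is a minimal sketch of the general idea described in the abstract: train a base performance/resource model on plentiful data from a low-end (edge) environment, then adapt it to a new (cloud) environment using only five labeled samples (5-shot learning). The feature dimensions, synthetic data, scikit-learn estimators, and the linear-correction adaptation strategy are illustrative assumptions, not LEAPER's actual feature set or transfer method.

```python
# Illustrative sketch only: few-shot transfer of a regression model from an
# "edge" source environment to a "cloud" target environment. All data and
# model choices below are hypothetical, not LEAPER's implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
w = rng.uniform(1, 5, size=8)  # hypothetical "true" feature weights

# Hypothetical data: synthesis/implementation-tool features -> measured metric
# (e.g., resource usage). Plentiful on the edge FPGA, only 5 samples on the
# cloud FPGA (the 5-shot setting).
X_edge = rng.uniform(size=(500, 8))
y_edge = X_edge @ w + rng.normal(0, 0.1, 500)
X_cloud_few = rng.uniform(size=(5, 8))
y_cloud_few = 1.3 * (X_cloud_few @ w) + 2.0  # shifted/scaled target environment

# 1) Train the base (source) model on the edge environment.
base = GradientBoostingRegressor(n_estimators=200).fit(X_edge, y_edge)

# 2) Adapt to the cloud environment with the 5 labeled samples by fitting a
#    small linear correction on top of the base model's predictions.
correction = Ridge(alpha=1.0).fit(
    base.predict(X_cloud_few).reshape(-1, 1), y_cloud_few
)

def predict_cloud(X):
    """Predict the cloud-environment metric for new candidate designs."""
    return correction.predict(base.predict(X).reshape(-1, 1))

# Fast early-stage exploration: score new design points without running
# synthesis/implementation in the cloud environment.
print(predict_cloud(rng.uniform(size=(3, 8))))
```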
Date of Conference: 23-26 October 2022
Date Added to IEEE Xplore: 19 December 2022
Index Terms:
- Transfer Learning
- FPGA-based System
- Machine Learning
- Machine Learning Models
- Resource Usage
- Unknown Environment
- Cloud Environment
- Cloud System
- Design Space Exploration
- Experimental Design
- Training Data
- Artificial Neural Network
- Nonlinear Model
- Model Building
- Target Model
- Gradient Boosting
- Base Learners
- Digital Signal Processing
- Cloud Platform
- Target Environment
- High-level Synthesis
- Jensen-Shannon Divergence
- Mean Relative Error
- Optimal Option
- Multiple Kernel
- ML-based Methods
- Target Platform
- Weak Learners
- Few-shot Learning