Abstract:
The optimal design of power converters often requires a long design cycle with a huge number of simulations to determine the optimal parameters. To shorten this cycle, this paper proposes a proximal policy optimization (PPO)-based model to optimize the design parameters of Buck and Boost converters. In each training step, the learning agent carries out an action that adjusts the values of the design parameters and interacts with a dynamic Simulink model. The simulation provides feedback on power efficiency, which guides the learning agent in optimizing the parameter design. Unlike deep Q-learning and standard actor-critic algorithms, PPO includes a clipped objective function that prevents the new policy from deviating too far from the old policy. This allows the proposed model to accelerate and stabilize the learning process. Finally, to show the effectiveness of the proposed method, the performance of different optimization algorithms is compared on two popular power converter topologies.
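The abstract does not give implementation details, but the clipped objective it refers to is the standard PPO surrogate loss (Schulman et al., 2017). The sketch below is illustrative only; the function name and arguments are hypothetical and not taken from the paper.

```python
import numpy as np

def ppo_clipped_objective(new_log_probs, old_log_probs, advantages, epsilon=0.2):
    """Clipped surrogate objective of PPO (illustrative sketch, not the paper's code).

    The probability ratio r = pi_new(a|s) / pi_old(a|s) is clipped to
    [1 - epsilon, 1 + epsilon], which keeps the updated policy close to
    the old one and stabilizes training.
    """
    ratio = np.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    # The element-wise minimum gives a pessimistic lower bound on the
    # unclipped objective, removing the incentive for large policy updates.
    return np.mean(np.minimum(unclipped, clipped))
```

When the new and old policies agree (ratio = 1), the objective reduces to the mean advantage; when the ratio leaves the trust interval, the clipped term caps the contribution of that sample.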
Date of Conference: 20-22 February 2023
Date Added to IEEE Xplore: 23 March 2023