Investigating Ultra-Fast LUT-based Neural Networks for FPGAs
Project Description
Look-Up Table (LUT) based neural networks on FPGAs achieve ultra-low latency and high energy efficiency, but often suffer from severe routing congestion and suboptimal logic resource utilization. This project explores a hardware-software co-design method tailored to LUT-based neural networks. Students will be tasked with designing an algorithmic method to optimize a pre-trained LUT-based neural network, as well as implementing and evaluating it on FPGA hardware. This work will contribute directly to the development of ultra-fast AI solutions.
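To illustrate the core idea behind LUT-based neural networks (not the project's actual method), the following sketch compiles a tiny quantized neuron into a truth table, which is exactly what an FPGA LUT stores; the weights, threshold, and function names are illustrative assumptions:

```python
from itertools import product

# Illustrative sketch: a neuron with binary {0,1} inputs, fixed integer
# weights, and a step activation. Because its inputs are few and discrete,
# the whole neuron can be precomputed as a truth table.
def neuron(bits, weights, threshold):
    """Evaluate the neuron on one input bit pattern."""
    acc = sum(b * w for b, w in zip(bits, weights))
    return 1 if acc >= threshold else 0

def compile_to_lut(weights, threshold):
    """Enumerate every input combination; the mapping is the LUT contents."""
    n = len(weights)
    return {bits: neuron(bits, weights, threshold)
            for bits in product((0, 1), repeat=n)}

# A 4-input neuron fits in a single 4-input LUT (2^4 = 16 entries);
# inference then reduces to one table lookup per neuron.
lut = compile_to_lut(weights=(2, -1, 1, 1), threshold=2)
print(len(lut))           # 16 entries
print(lut[(1, 0, 1, 0)])  # prints 1: 2 + 0 + 1 + 0 = 3 >= 2
```

This lookup-per-neuron structure is what yields ultra-low latency, but wiring many such LUT neurons together is also what creates the routing congestion the project targets.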
Supervisor
ZHANG, Wei
Quota
2
Course type
UROP1100
Applicant's Roles
1. Conduct literature reviews on existing optimization methods for LUT-based neural networks.
2. Assist in the design and implementation of the algorithmic optimization technique.
3. Analyze the latency, energy efficiency, and performance of the hardware implementation on FPGAs.
Applicant's Learning Objectives
1. Study fundamental knowledge in LUT-based neural networks, including basic operations, training methods, and FPGA architecture.
2. Gain practical experience in software-hardware co-optimization methodologies.
3. Explore the latest trends and frontiers of modern AI acceleration.
Complexity of the project
Moderate