CASL: Circuits & Systems Lab
- Circuits and Systems for Deep Learning Technology
- Circuits and Systems for Communication Signal Processing
- Detection & Decoding for Next-Generation MIMO Systems
- Polar Decoder
- VLSI-Friendly Algorithms for Synchronization, Channel Estimation, etc.
- Circuits and Systems for Multimedia Signal Processing
- VLSI for Sound Synthesis
- Real-Time Optical Distortion Correction Processor
- Low-Complexity De-Weathering Processor
- Hardware-Software Co-Design
- Embedded CPU
- Computer Arithmetic
Resource-Efficient Inference Processor for BNNs on Mobile Edge Devices
- Features
- XNOR-Net-Based Inference, Supporting Arbitrary DNN & CNN Models
- Skipping Redundant Computations Based on Modified Block Structures
- Achievements
- 37.1x Speed-Up (vs. SW Inference)
- High Resource Efficiency: 41.45 MOP/s/LUT
- Successfully Fitted in a Low-Cost FPGA (Intel Cyclone V)
- Related Papers
- T. Kim et al., “A resource-efficient inference accelerator for binary convolutional neural networks in a low-cost FPGA,” IEEE Trans. Circuits Syst. II, 2021.
- J. Shin et al., “Fast inference of binarized convolutional neural networks exploiting max pooling with modified block structure,” IEICE Trans. Inf. Syst., 2019.
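The XNOR-Net-based inference listed above reduces multiply-accumulate operations on binarized values to bitwise XNOR plus a popcount. A minimal sketch of that idea (function names here are illustrative, not from the processor's actual design):

```python
def pack_bits(values):
    """Pack a sequence of +1/-1 values into an integer bitmask (bit = 1 for +1)."""
    word = 0
    for i, v in enumerate(values):
        if v > 0:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1, +1} vectors of length n via XNOR + popcount.
    Positions where the bits agree contribute +1, disagreements contribute -1,
    so dot = 2 * (number of matches) - n."""
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
print(binary_dot(pack_bits(a), pack_bits(b), len(a)))  # 0, same as sum(x*y)
```

In hardware, the XNOR and popcount map directly onto LUTs, which is what makes the per-LUT throughput figures above achievable.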
True-Single-Chip Dual-Core BNN Inference System
- Features
- True Single-Chip Implementation with On-Chip Flash Memory
- Cooperative-Working Dual Cores
- Output-Ch.-Wise Workload Division Technique
- Achievements
- 172.85 Images per Second in a Tiny FPGA (Intel® MAX® 10)
- Best-in-Class Resource/Energy Efficiency Among FPGA-Based AI Accelerators (as of 2020)
- Related Papers
- T. Kim et al., “IOTA: A 1.7-TOP/J inference processor for binary convolutional neural networks with 4.7K LUTs in a tiny FPGA,” Electron. Lett., 2020.
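The output-channel-wise workload division above splits a layer's output channels between the two cores so each computes an independent slice of the output feature map. A minimal sketch of that partitioning, assuming contiguous channel ranges (the actual division scheme of the dual-core system may differ):

```python
def split_output_channels(num_out_ch, num_cores=2):
    """Assign contiguous output-channel ranges [start, end) to each core,
    distributing any remainder one channel at a time to the first cores."""
    base, rem = divmod(num_out_ch, num_cores)
    ranges, start = [], 0
    for core in range(num_cores):
        count = base + (1 if core < rem else 0)
        ranges.append((start, start + count))
        start += count
    return ranges

print(split_output_channels(64))  # [(0, 32), (32, 64)]
print(split_output_channels(33))  # [(0, 17), (17, 33)]
```

Because each output channel depends on the full input feature map but on no other output channel, the two cores can proceed without synchronizing until the layer boundary.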
© 2024 CASL: Circuits & Systems Lab