m7md5303/AOHW25_185

Accelerated YOLO-Based Computer Vision System


The trend towards fully autonomous cars has grown rapidly in recent years. The purpose of this project is therefore to provide a pure-hardware system for autonomous driving, or for any self-moving robot. Our work has two main components: a YOLO-based object detector for detecting cars and a Sobel-based edge detector for detecting lanes. Both systems are free of software, employing only hardware blocks. With the YOLO block reaching 67 FPS and the Sobel block about 800 FPS, both are attractive candidates for integration into a larger system. The project is well suited to edge-computing devices, performing the heavy computational tasks on the Programmable Logic (PL) of FPGAs with reasonable resource utilization. The description video is found on this YouTube Link.
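As a quick sanity check on those throughput figures, the per-frame latency budget implied by each frame rate is simply 1000/FPS milliseconds (plain arithmetic, not code from the repository):

```python
# Per-frame latency implied by the reported throughput figures.
def latency_ms(fps):
    return 1000.0 / fps

yolo_ms = latency_ms(67)    # YOLO block: ~14.9 ms per frame
sobel_ms = latency_ms(800)  # Sobel block: 1.25 ms per frame
print(round(yolo_ms, 1), sobel_ms)
```

So the Sobel block leaves ample headroom for camera-rate lane detection, while the YOLO block still comfortably exceeds typical 30 FPS camera input.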

Team Members

Supervisor:

Institution

  • Electronics and Electrical Communications Department, Faculty of Engineering at Cairo University

Boards Used:

  • YOLO Block was deployed on: ZCU102 and PYNQ-Z2 (with different parallelism settings and clock frequencies)
  • Sobel Block was deployed on: PYNQ-Z2

Software Needed:

  • Vivado 2022.2
  • Vitis IDE 2022.2
  • FINN Framework
  • PYNQ image v3.0.1

Repository Architecture:

YOLO System folder: This directory includes all the source files for implementing the YOLO hardware system from scratch (including the training step).
It has two sub-directories, ZCU102_HW and PYNQ-Z2_HW, containing the Vitis files for the ZCU102 and the files required by the Jupyter Notebook for the PYNQ-Z2, respectively.

Lane System folder: This directory includes all the design files for the Lane block.
It has one sub-directory, PYNQ-Z2_HW, which includes the files required to run the Jupyter Notebook for this design.

Steps to build:

YOLO System:

ZCU102:

1. Create a new Vitis platform using the provided XSA file (yolo_zcu102.xsa) doc
2. Add main.c to the project sources, along with the provided testing-image header file.
3. Create a new debugging (GDB) configuration doc
4. After clicking Run, open Vivado to view the output waveform on the ILA.
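If you want to try an image of your own on the ZCU102, a small script along these lines can regenerate the testing-image header consumed by main.c. Note that the array name, element type, and formatting below are assumptions for illustration, not taken from the repository; match them to what main.c actually expects:

```python
import numpy as np

def image_to_c_header(pixels, array_name="test_img"):
    """Serialize a uint8 pixel array into a C header snippet.

    Hypothetical sketch: the identifier `test_img` and the flat
    row-major layout are assumptions about the provided header.
    """
    flat = np.asarray(pixels, dtype=np.uint8).ravel()
    body = ", ".join(str(v) for v in flat)
    return (f"#include <stdint.h>\n"
            f"static const uint8_t {array_name}[{flat.size}] = {{{body}}};\n")

header = image_to_c_header([[0, 255], [128, 64]])
print(header)
```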

PYNQ-Z2:

1. Set up the PYNQ image for the PYNQ-Z2 doc
2. Browse to the board's address, upload the provided files to the locations specified in the notebook, then run all cells to watch the output.
3. The provided testing image is named img2.txt.

  • Note that the last cell must be run twice, due to issues with the AXI DMA IP in Jupyter Notebooks.

Lane System:

PYNQ-Z2:

1. Set up the PYNQ image for the PYNQ-Z2 doc
2. Browse to the board's address, upload the provided files to the locations specified in the notebook, then run all cells to watch the output.
3. The provided testing image is named test1_crop.txt.

  • Note that the last cell must be run twice, due to issues with the AXI DMA IP in Jupyter Notebooks.
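When checking the Lane block's output, a software golden model of the 3x3 Sobel operator is handy for comparison. The sketch below uses the standard Sobel kernels with an |Gx| + |Gy| magnitude over the valid (unpadded) region; this is an assumption, as the hardware may use a different magnitude or border policy:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def sobel(img):
    """Sobel edge magnitude (|Gx| + |Gy|) over the valid region.

    Golden-model sketch: output is (h-2) x (w-2), i.e. borders are
    dropped rather than padded.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    for y in range(h - 2):
        for x in range(w - 2):
            win = img[y:y + 3, x:x + 3].astype(np.int32)
            gx = int((win * KX).sum())
            gy = int((win * KY).sum())
            out[y, x] = abs(gx) + abs(gy)
    return out
```

Running the same test1_crop.txt image through this model and through the hardware block should then produce matching edge maps, up to any border or saturation differences in the hardware.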

About

Repository for the AMD Open Hardware Competition with the project title "Accelerated YOLO-Based Computer Vision System"
