Using FPGAs as accelerators on MCUs that perform floating-point arithmetic

Machine learning (ML) is a field under Artificial Intelligence (AI) that focuses on training software applications to make accurate predictions without being explicitly programmed by a human being. Because of the advantages of edge computing, such as reduced latency, improved security, and effective use of bandwidth, it is desirable to execute machine learning algorithms on embedded devices. However, embedded devices such as microcontrollers are constrained in processing power and memory. It may therefore be beneficial to use an FPGA as an accelerator for the microcontroller unit (MCU), offloading computationally heavy operations to the FPGA to boost performance. This paper explores the performance gains that can be attained when an FPGA is paired with a microcontroller that performs floating-point arithmetic. In the tests, the system with the accelerator ran at approximately twice the speed of the system comprising the MCU alone.
Capstone Project submitted to the Department of Engineering, Ashesi University in partial fulfillment of the requirements for the award of Bachelor of Science degree in Computer Engineering, May 2022