Custom transformer implementation for edge

Esergun, Yunus and Öztürk, Özcan (2025) Custom transformer implementation for edge. In: 10th International Conference on Fog and Mobile Edge Computing (FMEC), Tampa, FL, USA.

Full text not available from this repository.

Abstract

Although deep learning models have greatly improved image processing capabilities, they can be difficult to deploy in resource-constrained edge environments, mainly due to their high energy consumption and computational requirements. Transformers implement a hierarchical approach to interpreting images, differentiating them from traditional convolutional approaches in computer vision. Locality-Sensitive Hashing (LSH) is a widely used mechanism for clustering similar items and exploiting computational similarities. In this paper, we implement a Transformer with LSH as a hardware accelerator for edge computing environments. Our preliminary results indicate that it is possible to achieve a 1.35x speedup and a 68% power reduction with a 0.55% accuracy loss.
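
To make the LSH idea concrete, the following is a minimal, hypothetical software sketch of LSH-bucketed attention using random hyperplane hashing, in the spirit of Reformer-style approaches. It is not the authors' FPGA accelerator design; the hashing scheme, the number of hash bits, and the use of q + k as the hashed vector are illustrative assumptions only.

# Illustrative sketch of LSH-bucketed attention (assumed scheme, not the
# paper's accelerator). Tokens are hashed with random hyperplanes and
# attention is computed only within each bucket, reducing the quadratic cost.
import numpy as np

def lsh_buckets(x, n_hashes=4, seed=0):
    """Assign each row of x to a bucket via random hyperplane projections."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((x.shape[1], n_hashes))
    bits = (x @ planes) > 0                      # sign of each projection
    return bits @ (1 << np.arange(n_hashes))     # pack sign bits into bucket ids

def lsh_attention(q, k, v, n_hashes=4):
    """Softmax attention restricted to tokens that share an LSH bucket."""
    buckets = lsh_buckets(q + k, n_hashes)       # shared hash of queries/keys (simplification)
    out = np.zeros_like(v)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]
        scores = q[idx] @ k[idx].T / np.sqrt(q.shape[1])
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        out[idx] = weights @ v[idx]
    return out

# Example: 64 tokens with 32-dimensional embeddings
q = np.random.randn(64, 32); k = np.random.randn(64, 32); v = np.random.randn(64, 32)
print(lsh_attention(q, k, v).shape)              # (64, 32)

Because attention is evaluated only inside buckets of similar tokens, the dense score matrix never has to be materialized, which is the property a hardware accelerator can exploit for speed and power savings.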
Item Type: Papers in Conference Proceedings
Uncontrolled Keywords: Accelerator; Edge; FPGA; GPU; Inference; LSH; Power; Transformer
Divisions: Faculty of Engineering and Natural Sciences > Academic programs > Computer Science & Eng.
Faculty of Engineering and Natural Sciences > Academic programs > Electronics
Faculty of Engineering and Natural Sciences
Depositing User: Özcan Öztürk
Date Deposited: 01 Oct 2025 14:29
Last Modified: 01 Oct 2025 14:29
URI: https://research.sabanciuniv.edu/id/eprint/52837
