Abstract
With the emergence of wearable devices and other embedded systems, deploying
large language models (LLMs) on edge platforms has become an urgent need.
However, this is challenging because of their high computational and memory
demands. Although recent low-bit quantization methods (e.g., BitNet, DeepSeek)
compress weights to as low as 1.58 bits with minimal accuracy loss, edge
deployment is still constrained by limited on-chip resources, power budgets,
and the often-neglected long latency of the prefill stage. We present
TeLLMe, the first table-lookup-based ternary LLM accelerator for
low-power edge FPGAs that fully supports both prefill and autoregressive
decoding using 1.58-bit weights and 8-bit activations. TeLLMe incorporates
several novel techniques, including (1) a table-lookup-based ternary matrix
multiplication (TLMM) engine utilizing grouped activations and online
precomputation for low resource utilization and high throughput; (2) a
fine-grained analytic URAM-based weight buffer management scheme for efficient
loading and compute engine access; (3) a streaming dataflow architecture that
fuses floating-point element-wise operations with linear computations to hide
latency; (4) a reversed-reordered prefill-stage attention with fused attention
operations for high memory efficiency; and (5) a resource-efficient specialized
decoding-stage attention. Under a 5 W power budget, TeLLMe delivers up to
25 tokens/s decoding throughput and 0.45–0.96 s time-to-first-token (TTFT) for
64–128-token prompts, marking a significant energy-efficiency advancement in
LLM inference on edge FPGAs.
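
For intuition on technique (1), below is a minimal software sketch of table-lookup ternary matrix multiplication with grouped activations and online precomputation; it is not the paper's HLS/FPGA implementation, and the group size, base-3 weight encoding, and all identifiers are illustrative assumptions. The idea: for each group of G activations, all 3^G possible ternary partial sums are precomputed once per activation vector, so every weight group afterwards costs a single table lookup instead of G multiply-accumulates.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative TLMM-style sketch (assumed parameters, not the paper's design).
// Weights are ternary {-1, 0, +1}, packed one base-3 code per group of G
// columns; activations are int8, as in a W1.58A8 setting.
constexpr int G = 4;          // activations per group (assumption)
constexpr int TABLE = 81;     // 3^G possible ternary patterns per group

// Online precomputation: for one group of G activations, enumerate all 3^G
// ternary weight patterns and store their partial sums.
static void build_group_table(const int8_t* a, int32_t* table) {
    for (int code = 0; code < TABLE; ++code) {
        int32_t sum = 0;
        int c = code;
        for (int i = 0; i < G; ++i) {
            int trit = (c % 3) - 1;   // base-3 digit 0/1/2 -> weight -1/0/+1
            c /= 3;
            sum += trit * a[i];
        }
        table[code] = sum;
    }
}

// y = W x for a rows x cols ternary W stored as one code per weight group
// (cols assumed to be a multiple of G).
static void tlmm(const uint8_t* w_codes, const int8_t* x,
                 int rows, int cols, int32_t* y) {
    const int groups = cols / G;
    // Tables are built once per activation vector and reused by every row.
    std::vector<int32_t> tables(groups * TABLE);
    for (int g = 0; g < groups; ++g)
        build_group_table(x + g * G, tables.data() + g * TABLE);

    for (int r = 0; r < rows; ++r) {
        int32_t acc = 0;
        for (int g = 0; g < groups; ++g)
            acc += tables[g * TABLE + w_codes[r * groups + g]];
        y[r] = acc;
    }
}

int main() {
    // Tiny demo: 2x4 ternary matrix, one group per row.
    // Row 0 = [+1, -1, 0, +1] -> digits (w_i + 1) = {2,0,1,2}, code = 65
    // Row 1 = [ 0,  0, +1, -1] -> digits {1,1,2,0},            code = 22
    uint8_t w_codes[2] = {65, 22};
    int8_t x[4] = {10, 20, 30, 40};
    int32_t y[2];
    tlmm(w_codes, x, 2, 4, y);
    printf("y = [%d, %d]\n", y[0], y[1]);  // expect [30, -10]
    return 0;
}
```

With G = 4, each weight group costs one table read in place of four multiply-accumulates, and the small per-group tables map naturally onto on-chip memories, which is plausibly the kind of resource saving the abstract attributes to the TLMM engine.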
Authors
Ye Qiao
Zhiheng Chen
Yifan Zhang
Yian Wang
Sitao Huang
Submitted
October 3, 2025
Key Contributions
This paper presents TeLLMe, the first table-lookup-based ternary LLM accelerator for low-power edge FPGAs that fully supports both prefill and autoregressive decoding with extremely low-bit (1.58-bit) weights. It introduces novel techniques such as a table-lookup ternary matrix multiplication (TLMM) engine with grouped activations and online precomputation to achieve low resource utilization and high throughput, addressing the critical challenges of deploying LLMs on resource-constrained edge devices.
Business Value
Enables the deployment of powerful LLMs on a wide range of edge devices, unlocking new applications in areas such as on-device AI assistants, real-time translation, and intelligent sensors.