Gemel: Model Merging for Memory-Efficient, Real-Time Video Analytics at the Edge

Arthi Padmanabhan, Neil Agarwal, Ganesh Ananthanarayanan, Yuanchao Shu, Nikolaos Karianakis, Guoqing Harry Xu, Ravi Netravali
Published at USENIX NSDI 2023

Abstract

Video analytics pipelines have steadily shifted to edge deployments to reduce bandwidth overheads and privacy violations, but in doing so, face an ever-growing resource tension. Most notably, edge-box GPUs lack the memory needed to concurrently house the growing number of (increasingly complex) models for real-time inference. Unfortunately, existing solutions that rely on time/space sharing of GPU resources are insufficient as the required swapping delays result in unacceptable frame drops and accuracy loss. We present model merging, a new memory management technique that exploits architectural similarities between edge vision models by judiciously sharing their layers (including weights) to reduce workload memory costs and swapping delays. Our system, Gemel, efficiently integrates merging into existing pipelines by (1) leveraging several guiding observations about per-model memory usage and inter-layer dependencies to quickly identify fruitful and accuracy-preserving merging configurations, and (2) altering edge inference schedules to maximize merging benefits. Experiments across diverse workloads reveal that Gemel reduces memory usage by up to 60.7%, and improves overall accuracy by 8-39% relative to time or space sharing alone.
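To make the layer-sharing idea concrete, the sketch below (our illustration, not Gemel's implementation; the layer shapes, model roles, and class counts are hypothetical) shows two PyTorch models referencing a single shared backbone stack. Because both models point at the same parameter tensors, the shared layers occupy GPU memory only once, while each model keeps its own task-specific head.

```python
# Minimal sketch (not Gemel's code) of the core idea behind model merging:
# two vision models whose early layers are architecturally identical can
# reference the same layer objects, so those weights are stored once.
# Layer shapes and model structure here are illustrative only.
import torch
import torch.nn as nn

# A shared "stem" of early layers, allocated a single time.
shared_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

class DetectorWithSharedStem(nn.Module):
    """Per-model wrapper: shared stem by reference, private head per model."""
    def __init__(self, stem: nn.Module, num_classes: int):
        super().__init__()
        self.stem = stem  # reference to the shared layers, not a copy
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.stem(x))

model_a = DetectorWithSharedStem(shared_stem, num_classes=10)  # e.g., one camera feed
model_b = DetectorWithSharedStem(shared_stem, num_classes=5)   # e.g., another feed

# Both models see the exact same parameter tensors for the shared stem,
# so the merged workload pays that stem's memory cost only once.
assert model_a.stem[0].weight.data_ptr() == model_b.stem[0].weight.data_ptr()
```

Gemel's contribution lies in deciding which layers across which models can be shared like this without hurting accuracy, and in retraining and scheduling around those shared layers; the snippet only illustrates why sharing reduces the workload's memory footprint.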

Materials