Abstract: Event cameras, with microsecond temporal resolution and high dynamic range (HDR), emit high-speed event streams for perception tasks. Despite recent advances, GNN-based perception methods tend to rely on straightforward pairwise connectivity in pure Euclidean space, where they struggle to capture long-range dependencies and fail to effectively characterize the inherent hierarchical structure of non-uniformly distributed event streams. To this end, in this paper we propose a novel approach named EHGCN, which pioneers the perception of event streams in both Euclidean and hyperbolic spaces for event vision. In EHGCN, we introduce an adaptive sampling strategy that dynamically regulates sampling rates, retaining discriminative events while attenuating chaotic noise. We then present a Markov Vector Field (MVF)-driven, motion-aware hyperedge generation method based on motion state transition probabilities; it eliminates cross-target spurious associations, provides critical topological priors, and captures long-range dependencies between events. Finally, we propose a Euclidean-Hyperbolic GCN that fuses information aggregated locally in Euclidean space with information modeled globally and hierarchically in hyperbolic space to achieve hybrid event perception. Experimental results on event perception tasks such as object detection and recognition validate the effectiveness of our approach.
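
As a rough illustration of the hybrid Euclidean-hyperbolic idea only (not the authors' implementation, which the abstract does not detail), the sketch below lifts Euclidean event features onto the Poincaré ball via the exponential map at the origin and fuses the two views in the tangent space. The Poincaré-ball model, the curvature value, the additive fusion rule, and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch of Euclidean-hyperbolic feature fusion (assumptions noted above).
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Exponential map at the origin of the Poincaré ball with curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x: torch.Tensor, c: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Logarithmic map back to the tangent space at the origin."""
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps).clamp_max(1.0 - eps)
    return torch.atanh(sqrt_c * norm) * x / (sqrt_c * norm)

def fuse(h_euc: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Hypothetical fusion: lift Euclidean features to the ball, map back,
    and average the two views (a stand-in for the paper's fusion module)."""
    h_hyp = expmap0(h_euc, c)   # hyperbolic view of the event features
    h_back = logmap0(h_hyp, c)  # return to the tangent space for fusion
    return 0.5 * (h_euc + h_back)

if __name__ == "__main__":
    feats = torch.randn(8, 16)      # 8 event nodes, 16-dim features (illustrative)
    print(fuse(feats).shape)        # torch.Size([8, 16])
```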